# 1 Introduction Real-world enterprise data, such as financial transactions, supply chain data, e-commerce records, product catalogs, customer interactions, and electronic health records, are predominantly stored in relational databases [8]. These databases typically consist of multiple tables, each dedicated to different entity types, interconnected through primary and foreign key links. This abstraction underpins large quantities of complex, dynamically updated data that scale with business volume, storing potentially immense, unexploited knowledge [13]. However, extracting predictive patterns from such data has traditionally depended on manual feature engineering within complex machine learning pipelines, requiring the transformation of multi-table records into flat feature vectors suitable for models like deep neural networks and decision trees [6]. Relational Deep Learning. To enable end-to-end deep learning, relational databases can be represented as relational entity graphs [13], where nodes correspond to entities and edges capture primary-foreign key relationships. This graph-based representation allows Graph Neural Networks (GNNs) to learn abstract features directly from the underlying data structure, effectively modeling complex dependencies for various downstream prediction tasks. With this setup, termed Relational Deep Learning (RDL), GNNs reduce or eliminate the need for manual feature engineering and often lead to improved performance [42], at a fraction of the traditional model development cost. Figure 1: Overview of the RELGT architecture. First, the input relational entity graph (REG) is converted into tokens where each training seed node (such as the customer node in this example) gets a fixed number of neighboring nodes, which are encoded with a multi-element tokenization strategy.
These tokens are then passed through a Transformer network that builds both local and global representations, which are then fed to downstream prediction layers. Existing gaps. Despite their effectiveness, standard message-passing GNN architectures [15, 29, 18, 48] have notable limitations, such as insufficient structural expressiveness [52, 37, 34] and restricted long-range modeling capabilities [1]. For example, consider an e-commerce database with three tables: customers, transactions, and products, which can be represented as a relational entity graph as in Figure 1. In a standard GNN, transactions are always two hops away from each other, connected only through shared customers. This creates an information bottleneck: transaction-to-transaction patterns require multiple layers of message passing, while product relationships remain entirely indirect in shallow networks. Furthermore, products would never directly interact in a two-layer GNN [42], as their messages must pass through both a transaction and a customer, highlighting the inherent structural constraints of GNN architectures that restrict capturing long-range dependencies. Graph Transformers (GTs) have emerged as more expressive models for graph learning, utilizing self-attention in the full graph to increase the range of information flow and incorporating positional and structural encodings (PEs/SEs) to better capture graph topology [9, 53, 41]. These advances have produced strong results across domains [38], including foundation models for molecular graphs [46].
However, many GT designs are limited to non-temporal, homogeneous, and small-scale graphs, assumptions that do not hold for relational entity graphs (REGs) [13], which are typically (i) heterogeneous, with different tables representing distinct node types; (ii) temporal, with entities often associated with timestamps and requiring careful handling to prevent data leakage; (iii) large-scale, containing millions or more records across multiple interconnected tables. In particular, existing PEs often require precomputation, depend on graph size, and typically do not scale well to large, heterogeneous, or dynamic graphs [3, 26]. For instance, node2vec [17], while more efficient than Laplacian or random walk PEs, can become prohibitively expensive and impractical to compute on massive graphs [40]. These limitations, along with the inability to capture the multi-dimensional complexity of relational structures, render current GTs inadequate for relational databases. Present work. We introduce the Relational Graph Transformer (RELGT), the first Graph Transformer specifically designed for relational entity graphs. RELGT addresses key gaps in existing methods by enabling effective graph representation learning within the RDL framework. It is a unified model that explicitly captures the temporality, heterogeneity, and structural complexity inherent to relational graphs. We summarize the architecture as follows (Figure 1): • Tokenization: We develop a multi-element tokenization scheme that converts each node into structurally enriched tokens. By sampling fixed-size subgraphs as local context windows and encoding each node’s features, type, hop distance, time, and local structure, RELGT captures fine-grained graph properties without expensive precomputation at the subgraph or graph level. • Attention: We develop a transformer network that combines local and global representations, adapting existing GT architectures [41]. 
The model extracts features from the local tokens while simultaneously attending to learnable global tokens that act as soft centroids, effectively balancing fine-grained structural modeling with database-wide patterns [30]. • Validation: We showcase RELGT’s effectiveness through a comprehensive evaluation on 21 tasks from RelBench [42]. RELGT consistently outperforms GNN baselines, with gains of up to $18\%$, establishing transformers as a powerful architecture for relational deep learning. Compared to HGT [20], a strong GT baseline for heterogeneous graphs, RELGT achieves better results without added computational cost, even when HGT uses Laplacian eigenvectors for positional encoding. # 2 Background # 2.1 Relational Deep Learning Relational Deep Learning (RDL) is an end-to-end representation learning framework that converts relational databases into graph structures, enabling neural networks to be applied directly and eliminating the need for manual feature extraction in multi-table data pipelines [13]. Definitions. Formally, we can define a relational database as the tuple $(T, R)$ comprising a collection of tables $T = \{T_1, \dots, T_n\}$ connected through inter-table relationships $R \subseteq T \times T$. A link $(T_{\mathrm{fkey}}, T_{\mathrm{pkey}}) \in R$ denotes a foreign key in one table referencing a primary key in another. Each table contains entities (rows) $\{v_1, \ldots, v_{n_T}\}$, with each entity typically consisting of: (1) a unique identifier (primary key), (2) references to other entities (foreign keys), (3) entity-specific attributes, and (4) timestamp information indicating when the entity was created or modified. The structure of relational databases inherently forms a graph representation, called relational entity graphs (REGs).
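As a concrete, minimal sketch of this schema-to-graph construction (plain Python, with table and column names invented for illustration), two tables linked by a foreign key can be converted into a typed, timestamped node and edge set:

```python
# Toy instance of the (T, R) definition: customers and transactions tables,
# where transactions.customer_id is a foreign key into customers.
customers = [
    {"customer_id": 1, "name": "Ada", "timestamp": 10},
    {"customer_id": 2, "name": "Bob", "timestamp": 12},
]
transactions = [
    {"txn_id": 100, "customer_id": 1, "amount": 9.99, "timestamp": 15},
    {"txn_id": 101, "customer_id": 2, "amount": 4.50, "timestamp": 16},
    {"txn_id": 102, "customer_id": 1, "amount": 2.00, "timestamp": 20},
]

def build_reg(tables, links):
    """One node per row, one edge per primary-foreign key reference.
    `tables` maps a table name to (rows, pkey column); `links` maps
    (fkey table, fkey column) to the referenced table."""
    nodes = {}   # (table, pkey value) -> node attributes
    edges = []   # ((src table, src pk), (dst table, referenced pk))
    for name, (rows, pkey) in tables.items():
        for row in rows:
            nodes[(name, row[pkey])] = {"type": name, "time": row["timestamp"]}
    for (src, fkey_col), dst in links.items():
        rows, pkey = tables[src]
        for row in rows:
            edges.append(((src, row[pkey]), (dst, row[fkey_col])))
    return nodes, edges

tables = {"customers": (customers, "customer_id"),
          "transactions": (transactions, "txn_id")}
links = {("transactions", "customer_id"): "customers"}
nodes, edges = build_reg(tables, links)  # 5 typed nodes, 3 fkey edges
```

A real RDL pipeline would additionally attach multi-modal column features to each node; here nodes carry only their type and timestamp.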
An REG is formally defined as a heterogeneous temporal graph $G = ( V , E , \phi , \psi , \tau )$ , where nodes $V$ represent entities from the database tables, edges $E$ represent primary-foreign key relationships, $\phi$ maps nodes to their respective types based on source tables, $\psi$ assigns relation types to edges, and $\tau$ captures the temporal dimension through timestamps [13]. Challenges. Relational entity graphs exhibit three distinctive properties that set them apart from conventional graph data. First, their structure is fundamentally schema-defined, with topology shaped by primary-foreign key relationships rather than arbitrary connections, creating specific patterns of information flow that require specialized modeling approaches. Second, they incorporate temporal dynamics, as relational databases track events and interactions over time, necessitating techniques like time-aware neighbor sampling to prevent future information from leaking into past predictions. Third, they display multi-type heterogeneity, as different tables correspond to different entity types with diverse attribute schemas and data modalities, presenting challenges in creating unified representations that effectively integrate information across diverse node and edge types [44, 49]. These characteristics create both challenges and opportunities for GNN architectures, requiring models that can simultaneously address temporal evolution, heterogeneous information, and schema-constrained structures while processing potentially massive multi-table datasets. # 2.2 RDL Methods The baseline GNN approach introduced by [42] for RDL uses a heterogeneous GraphSAGE [18] model with temporal-aware neighbor sampling, which demonstrates significant improvements compared to traditional tabular methods like LightGBM [27] across all tasks in the RelBench benchmark. 
This baseline architecture leverages PyTorch Frame’s multi-modal feature encoders [19] to transform diverse entity attributes into initial feature embeddings that serve as input to the GNN. Several specialized architectures have been developed to address specific challenges in relational entity graphs. RelGNN [5] introduces composite message-passing with atomic routes to facilitate direct information exchange between neighbors of bridge and hub nodes, commonly found in relational structures. Similarly, ContextGNN [55] employs a hybrid approach, combining pair-wise and two-tower representations, specifically optimized for recommendation tasks in RelBench. Beyond pure GNN approaches, retrieval-augmented generation techniques [51] and hybrid tabular-GNN methods [32] have also demonstrated comparable or slightly superior performance to the standard GNN baseline, while demonstrating the use of LLMs [16] and offering inference speedups, respectively. These approaches confirm the effectiveness of graph, tabular, and LLM-based methods for downstream predictions in relational databases. However, these methods typically optimize specific aspects of the problem, failing to incorporate broader advances from GTs in general graph learning. # 2.3 Graph Transformers Graph Transformers extend the self-attention mechanism from sequence modeling [47] to graph-structured data, offering powerful alternatives to traditional GNNs [9]. These models typically restrict attention to local neighborhoods, functioning as message-passing networks with attention-based aggregation [24, 2], while positional encodings are developed based on Laplacian eigenvectors [10]. Subsequent Graph Transformers incorporate global attention mechanisms, allowing all nodes to attend to one another [53, 36, 31]. This moves beyond the local neighborhood limitations of standard GNNs [1], albeit at the cost of significantly increased computational complexity.
Modern GT architectures have improved the aforementioned early works by creating effective structural encodings and ensuring scalability to medium and large-scale graphs. For structural expressiveness of the node tokens, several positional and structural encoding methods have been developed [12, 3, 33, 22, 26] to inject the input graph topology. For scalability, various strategies have emerged, including hierarchical clustering that coarsens graphs [57, 59], sparse attention mechanisms that reduce computational cost [41, 45], and neighborhood sampling techniques for processing massive graphs [58, 4, 30, 11]. Models like GraphGPS [41] combine these advances through hybrid local-global designs that maintain Transformers’ global context advantages while ensuring practical efficiency when scaling to medium and large graph datasets. However, these approaches exhibit several key limitations: they are largely confined to static graphs, and lack mechanisms to handle multiple node and edge types. While specialized Transformers for heterogeneous graphs exist [20, 35, 59, 56], integrating them, alongside other aforementioned methods, into the RDL pipeline remains challenging. This is primarily because adapting positional encodings under precomputation constraints is difficult, compounded by the complexity of modeling large-scale, temporal, and heterogeneous relational entity graphs (REGs). # 3 RELGT: Relational Graph Transformer # 3.1 Tokenization Traditional Transformers in natural language processing represent text through tokens with two primary elements: (i) token identifiers (or features) that denote the token from a vocabulary set, and (ii) positional encodings that represent sequential structure [47]. For example, a token can correspond to a word and its positional encoding can correspond to its order in the input sentence.
Similarly, Graph Transformers generally adapt this two-element representation to graphs, where nodes are tokens with features, and graph positional encodings provide structural information [9, 28, 41]. Although this two-element approach works well for homogeneous static graphs, it becomes computationally inefficient when trying to encode multiple aspects of graph structural information for REGs. In particular, capturing heterogeneity, temporality, and schema-defined structure (as defined in Section 2.1) through a single positional encoding scheme would either require complex, multi-stage encoding or result in significant information loss about the rich relational context. For instance, if we were to extend existing PEs for REGs, several practical challenges emerge: (i) standard Laplacian or random walk-based PEs would need significant modification to differentiate between multiple node types (e.g., customers vs. products vs. transactions), (ii) these encodings lack mechanisms to incorporate temporal dynamics critical for time-sensitive predictions (e.g., capturing that a user’s recent purchases are more relevant than older ones), and (iii) the scale of relational databases makes global PE computation in REGs prohibitively expensive. With millions of records across tables, precomputation would only be feasible on small subgraphs, resulting in incomplete structural context. # 3.1.1 Proposed Approach RELGT overcomes these limitations through a multi-element token representation approach, without any computational overhead that depends on the number of nodes in the input REG. Rather than trying to compress all structural information into a single positional encoding, we decompose the token representation into distinct elements that explicitly model different aspects of relational data.
This decoupled design allows each component to capture a specific characteristic of REGs: node features represent entity attributes, node types encode table-based heterogeneity, hop distance preserves relative distances among nodes in a local context, time encodings capture temporal dynamics, and GNN-based positional encodings preserve local graph structure. Sampling and token elements. The tokenization process in RELGT converts a REG $G = (V, E, \phi, \psi, \tau)$ into sets of tokens suitable for processing by the Transformer network. Specifically, as shown in Figure 2, for each training seed node $v_i \in V$, we first sample a fixed set of $K$ neighboring nodes $v_j$ from within 2 hops of the local neighborhood using temporal-aware sampling, ensuring that only nodes with timestamps $\tau(v_j) \leq \tau(v_i)$ are included to prevent temporal leakage. Each token in this set is represented by a 5-tuple: $(x_{v_j}, \phi(v_j), p(v_i, v_j), \tau(v_j) - \tau(v_i), \mathrm{GNN\text{-}PE}_{v_j})$, where (i) node features $(x_{v_j})$ denote the raw features derived from entity attributes in the database, (ii) node type $(\phi(v_j))$ is a categorical identifier corresponding to the entity’s originating table, (iii) relative hop distance $(p(v_i, v_j))$ captures the structural distance between the seed node $v_i$ and the neighbor node $v_j$, (iv) relative time $(\tau(v_j) - \tau(v_i))$ represents the temporal difference between the neighbor and seed node, and (v) subgraph-based PE $(\mathrm{GNN\text{-}PE}_{v_j})$ provides a graph positional encoding for each node within the sampled subgraph, generated by applying a lightweight GNN to the subgraph’s adjacency matrix with random node feature initialization [43, 26].
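A minimal sketch of this tokenization step in plain Python (field names and the neighbor format are invented for illustration; the GNN-based PE element is left out of this sketch):

```python
import random

def tokenize_seed(seed, neighbors, K, rng=random.Random(0)):
    """Sketch of RelGT tokenization for one seed node. Each neighbor is a
    dict with 'feat', 'type', 'hop', and 'time' fields (illustrative names).
    Temporal-aware sampling keeps only neighbors no newer than the seed."""
    valid = [v for v in neighbors if v["time"] <= seed["time"]]
    sampled = rng.sample(valid, min(K, len(valid)))
    tokens = []
    for v in sampled:
        tokens.append({
            "feat": v["feat"],                     # (i) raw entity features
            "type": v["type"],                     # (ii) source-table node type
            "hop": v["hop"],                       # (iii) distance to the seed
            "rel_time": v["time"] - seed["time"],  # (iv) <= 0 by construction
            # (v) GNN-PE would be computed on the sampled subgraph; omitted here
        })
    return tokens

seed = {"feat": [1.0], "type": "customer", "time": 100}
neighbors = [
    {"feat": [0.5], "type": "transaction", "hop": 1, "time": 90},
    {"feat": [0.2], "type": "product", "hop": 2, "time": 95},
    {"feat": [0.9], "type": "transaction", "hop": 1, "time": 120},  # future: excluded
]
tokens = tokenize_seed(seed, neighbors, K=300)  # keeps the two past neighbors
```

Note how the future-dated transaction is dropped before sampling, which is what prevents temporal leakage at training time.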
Figure 2: The tokenization procedure. A temporal-aware subgraph sampling step extracts a fixed set of local tokens for each training seed node, denoted by the node in black. Each token incorporates its respective graph structure information, which is element-wise transformed to a common embedding space and combined to form the effective token representation to be fed to the Transformer network. Encoders. Each element in the 5-tuple is processed by a specialized encoder before being combined into the final token representation, as illustrated in Figure 2. 1. Node Feature Encoder. The node features $x_{v_j}$, representing the columnar attributes of the node $v_j$ in the REG (which corresponds to a table row in a database), are encoded into a $d$-dimensional embedding. Each modality, such as numerical, categorical, multi-categorical, text, and image data, is encoded separately using modality-specific encoders following [19], and the resulting representations are then aggregated into a unified $d$-dimensional embedding. $$ h_{\mathrm{feat}}(v_j) = \mathrm{MultiModalEncoder}(x_{v_j}) \in \mathbb{R}^d $$ where $\mathrm{MultiModalEncoder}(\cdot)$ is a unified feature encoder adapted from [19]. 2. Node Type Encoder. The node type encoding step converts each table-specific entity type $\phi(v_j)$ to a $d$-dimensional representation, incorporating the heterogeneous information from the input data. $$ h_{\mathrm{type}}(v_j) = W_{\mathrm{type}} \cdot \mathrm{onehot}(\phi(v_j)) \in \mathbb{R}^d $$ where $\phi(v_j)$ is the node type of $v_j$, $W_{\mathrm{type}} \in \mathbb{R}^{d \times |T|}$ is the learnable weight matrix, $|T|$ is the number of node types, and $\mathrm{onehot}(\cdot)$ is the one-hot encoding function. 3. Hop Encoder.
The relative hop distance $p(v_i, v_j)$, which captures the structural proximity between the seed node $v_i$ and a neighbor node $v_j$, is encoded into a $d$-dimensional embedding as: $$ h_{\mathrm{hop}}(v_i, v_j) = W_{\mathrm{hop}} \cdot \mathrm{onehot}(p(v_i, v_j)) \in \mathbb{R}^d $$ with $p(v_i, v_j)$ being the relative hop distance between seed node $v_i$ and neighbor node $v_j$, and $W_{\mathrm{hop}} \in \mathbb{R}^{d \times h_{\mathrm{max}}}$ the learnable matrix mapping hop distances (up to $h_{\mathrm{max}}$). 4. Time Encoder. The time encoder linearly transforms the time difference $\tau(v_j) - \tau(v_i)$ between a neighbor node $v_j$ and the seed node $v_i$: $$ h_{\mathrm{time}}(v_i, v_j) = W_{\mathrm{time}} \cdot (\tau(v_j) - \tau(v_i)) \in \mathbb{R}^d $$ where $\tau(v_j) - \tau(v_i)$ is the relative time difference, and $W_{\mathrm{time}} \in \mathbb{R}^{d \times 1}$ are learnable parameters. Figure 3: The Transformer network which processes the input tokens by first building local representations using the local tokens, then incorporating global context by attending to centroids that are dynamically updated during training. The final node representations combine both local structural details and global database context, enabling effective prediction across downstream tasks. 5. Subgraph PE Encoder. Finally, to capture local graph structure that cannot otherwise be represented by the other elements of the token, we apply a light-weight GNN to the subgraph.
This GNN encoder effectively preserves important structural relationships, such as complex cycles and quasi-cliques between entities [25], as well as parent-child relationships (e.g., a product node within the local subgraph corresponding to specific transactions), and can be written as: $$ h _ { \mathrm { p e } } ( v _ { j } ) = \mathbf { G N N } ( A _ { \mathrm { l o c a l } } , Z _ { \mathrm { r a n d o m } } ) _ { j } \in \mathbb { R } ^ { d } $$ where $\mathrm { G N N } ( \cdot , \cdot ) _ { j }$ is a light-weight GNN applied to the local subgraph yielding the encoding for node $v _ { j }$ , $A _ { \mathrm { l o c a l } } \in \mathbb { R } ^ { K \times K }$ is the adjacency matrix of the sampled subgraph of $K$ nodes, and $Z _ { \mathrm { r a n d o m } } \in$ $\mathbb { R } ^ { K \times d _ { \mathrm { i n i t } } }$ are randomly initialized node features for the GNN (with $d _ { \mathrm { i n i t } }$ as the initial feature dimension). One key advantage of using random node features in this GNN encoder is that it breaks structural symmetries between the subgraph topology and node attributes, thereby increasing the expressive power of GNN layers [43]. However, a fixed random initialization would destroy permutation equivariance which is a critical property for generalization. To address this challenge, we resample $Z _ { \mathrm { r a n d o m } }$ independently at every training step. This ‘stochastic initialization’ approach can be viewed as a relaxed version of the learnable PE method described in [26], thus approximately preserving permutation equivariance while retaining the expressivity gains afforded by the randomization. 
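The stochastic-initialization idea can be sketched as follows, assuming a plain mean-aggregation GNN whose per-layer weights are random projections standing in for learned parameters (an illustrative stand-in, not the paper's exact encoder). The key property is that the random node features $Z$ are resampled on every call:

```python
import numpy as np

def gnn_pe(A_local, d_init=8, n_layers=2, rng=None):
    """Sketch of the subgraph PE: a light-weight mean-aggregation GNN run on
    the K-node sampled subgraph with freshly resampled random node features.
    Layer widths and the aggregation rule are illustrative choices."""
    rng = rng or np.random.default_rng()
    K = A_local.shape[0]
    A_hat = A_local + np.eye(K)                       # add self-loops
    A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)  # row-normalize
    Z = rng.standard_normal((K, d_init))              # resampled every call
    for _ in range(n_layers):
        # random projection stands in for a learned weight matrix
        W = rng.standard_normal((Z.shape[1], d_init))
        Z = np.tanh(A_hat @ Z @ W * 0.1)
    return Z  # one d_init-dimensional PE per subgraph node

# Even structurally symmetric positions in a 4-cycle get distinct PEs,
# because the random features break the symmetry.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
pe = gnn_pe(A)
```

Resampling on each call mirrors the stochastic initialization described above: no single random draw is baked into the model, so permutation equivariance holds approximately in expectation.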
Finally, the effective token representation is formed by combining all encoded elements: $$ h_{\mathrm{token}}(v_j) = O \cdot [\, h_{\mathrm{feat}}(v_j) \,||\, h_{\mathrm{type}}(v_j) \,||\, h_{\mathrm{hop}}(v_i, v_j) \,||\, h_{\mathrm{time}}(v_i, v_j) \,||\, h_{\mathrm{pe}}(v_j) \,] $$ where $||$ denotes the concatenation of the individual encoder outputs, and $O \in \mathbb{R}^{d \times 5d}$ is a learnable matrix to mix the embeddings. This multi-element approach provides a comprehensive token representation that explicitly captures node features, type information, structural position, temporal dynamics, and local topology without requiring expensive computation on the graph structure. # 3.2 Transformer Network The Transformer network in RELGT, shown in Figure 3, processes the tokenized relational entity graph using a combination of local and global attention mechanisms, following the successful design components used in modern GTs [41, 50, 30, 11]. Local module. The local attention mechanism allows each seed node to attend to its $K$ local tokens selected during tokenization, capturing the fine-grained relationships defined by the database schema. This mechanism differs from a GNN used in RDL [42] in two key aspects: self-attention is used as the message-passing scheme, and the attention is all-pair, i.e., all nodes in the local $K$ set attend to each other. This is implemented using an $L$-layer Transformer network [47] and provides broader structural coverage compared to a baseline GNN [42]. A practical application of this improvement can be seen in the e-commerce example introduced in Section 1, where the proposed full-attention mechanism can directly connect seemingly unrelated products by identifying relationships through shared transactions or customer behaviors.
This capability enables the model to capture subtle associations, such as customers frequently purchasing unexpected combinations of items. The local node representation $h_{\mathrm{local}}(v_i)$ is obtained as: $$ h_{\mathrm{local}}(v_i) = \mathrm{Pool}(\mathrm{FFN}(\mathrm{Attention}(v_i, \{v_j\}_{j=1}^{K}))_L) $$ where $L$ denotes the number of layers, FFN and Attention are standard components in a Transformer [47], and Pool denotes the aggregation of $\{v_j\}_{j=1}^{K}$ and $v_i$ using a learnable linear combination. Global module. The global attention mechanism enables each seed node to attend to a set of $B$ global tokens that conceptually represent centroids of all nodes in the graph, and is adapted from prior works [30, 11]. These centroids are updated during training using an Exponential Moving Average (EMA) K-Means algorithm applied to seed node features in each mini-batch, providing a broader contextual view beyond the local neighborhood. The global representation is formulated as: $$ h_{\mathrm{global}}(v_i) = \mathrm{Attention}(v_i, \{c_b\}_{b=1}^{B}) $$ The final output representation of each node $v_i$ is obtained by combining local and global embeddings: $$ h_{\mathrm{output}}(v_i) = \mathrm{FFN}([\, h_{\mathrm{local}}(v_i) \,||\, h_{\mathrm{global}}(v_i) \,]) $$ with FFN being a feed-forward network. The components of the Transformer in all stages follow standard instantiations with normalization and residual connections. For downstream prediction, the combined representation of the seed node is passed through a task-specific prediction head. The model is trained end-to-end using suitable task-specific loss functions.
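A minimal numerical sketch of this hybrid read-out, assuming single-head dot-product attention with shared key/value vectors and omitting the learnable projections, normalization, and residual connections of the full model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend(query, keys):
    """Single-head dot-product attention of one query over a token set
    (keys double as values here, a simplification for the sketch)."""
    scores = softmax(keys @ query / np.sqrt(len(query)))
    return scores @ keys

def relgt_output(seed_emb, local_tokens, centroids):
    """Hybrid read-out: the seed attends to its K local tokens and to B
    global centroid tokens; the two views are concatenated (the FFN head
    and EMA K-Means centroid updates are omitted). Shapes illustrative."""
    h_local = attend(seed_emb, local_tokens)   # fine-grained local context
    h_global = attend(seed_emb, centroids)     # database-wide soft centroids
    return np.concatenate([h_local, h_global])

rng = np.random.default_rng(0)
d, K, B = 16, 300, 4096                        # K and B as in Section 4.1
seed = rng.standard_normal(d)
out = relgt_output(seed,
                   rng.standard_normal((K, d)),   # local token embeddings
                   rng.standard_normal((B, d)))   # global centroid tokens
```

In the full model the centroids are not random but are EMA K-Means estimates refreshed from seed-node features in each mini-batch.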
By leveraging multi-element token representations within a hybrid local-global Transformer architecture, RELGT effectively addresses the challenges of heterogeneity, temporal dynamics, and schema-defined structures inherent in relational entity graphs. # 4 Experiments RELGT is evaluated on the recently introduced Relational Deep Learning Benchmark (RelBench) [42]. RelBench consists of 7 datasets from diverse relational database domains, including e-commerce, clinical records, social networks, and sports, among others. These datasets are curated from their respective source domains and span a wide range of sizes, from 1.3K to 5.4M records in the training set for the prediction tasks, with a total of 47M training records. For each dataset, multiple predictive tasks are defined, such as predicting a user’s engagement with an advertisement within the next four days or determining whether a clinical trial will achieve its primary outcome within the next year. In total, RelBench has 30 tasks across the 7 datasets, covering entity classification, entity regression, and recommendation. For our evaluation, we focus on 21 entity classification and regression tasks. # 4.1 Setup and Baselines We implement RELGT within the RDL pipeline [42] by replacing the original GNN component, while preserving the learning mechanisms, database loaders, and task evaluators. The model has between 10-20 million parameters, and we use a learning rate of $10^{-4}$. For tasks with fewer than one million training nodes, we tune the number of layers $L \in \{1, 4, 8\}$ and use dropout rates of 0.3, 0.4, 0.5. For tasks with more than one million training nodes, we fix the number of layers to $L = 4$ due to compute budgets. For the sampling during the token preparation stage, we use $K = 300$ local neighbors and set $B = 4096$ as the number of tokens for global centroids.
For smaller datasets (under one million training nodes), we use a batch size of 256 to ensure sufficient training steps. For larger datasets, we use a batch size of 1024. We do not perform exhaustive hyperparameter tuning; rather, our goal is to showcase the benefits of using RELGT in place of GNNs within the RDL framework. As shown in our ablations of the multi-element tokenization and the global module in RELGT (Table 2), as well as the context size (Figure 4), careful tuning may further improve performance across different tasks. In addition to the HeteroGNN baseline used in RDL, we report results for two variants of the Heterogeneous Graph Transformer (HGT) [20] to highlight the advantages of RELGT over existing GT models. Notably, many GTs, such as GraphGPS [41], are not directly applicable to heterogeneous graphs. Therefore, we adopt HGT and an enhanced version, HGT+PE, which incorporates Laplacian positional encodings (LapPE). These positional encodings are computed on the sampled subgraphs rather than the entire graph. Additional implementation details are included in Appendix A.3. (a) MAE for entity regression. Lower is better. (b) AUC for entity classification. Higher is better. Table 1: Test set results on the entity regression and classification tasks in RelBench. Best values are in bold. RDL: HeteroGNN baseline [42], HGT: Heterogeneous GT [20], PE: Laplacian Positional Encodings [10]. Relative gains are expressed as percentage improvement over the RDL baseline. # 4.2 Results and Discussion RELGT improves over GNN in RDL. The experimental results in Tables 1a and 1b demonstrate that RELGT consistently matches or outperforms the standard GNN baseline used in RDL [42] across multiple datasets and tasks. We observe the largest improvements in rel-trial site-success ($18.43\%$), rel-avito ad-ctr ($15.85\%$), and rel-f1 driver-top3 ($10.56\%$), while on rel-stack user-badge, RELGT performs below the RDL baseline by a margin of $-3.94\%$. For all other tasks, RELGT consistently improves or matches the performance of the baseline GNN. We attribute the overall performance improvement to two key factors: (i) the broader structural coverage enabled by RELGT’s attention mechanisms as described in Section 3.2, and (ii) the fine-grained encodings employed in our tokenization scheme, which are studied further below and presented in Table 2. Subgraph GNN PE is critical in RELGT. In Table 2, we highlight the importance of several components in RELGT by conducting ablation studies. We remove one component at a time while preserving all others, and report the relative performance drop compared to the full RELGT model. Our results show that removing the subgraph GNN (PE), which encodes local subgraph structure (Section 3.1), leads to consistent performance degradation across all tasks. This component proves critical for disambiguating parent-child relationships when full attention is applied, thanks to the random node feature initialization [43, 26]. For instance, without the GNN (PE), products belonging to specific transactions (Figure 1) cannot be effectively captured, even when other encodings remain. Global module can bring gains depending on the task. In the same Table 2, our results of removing the global attention to the learnable centroids (Section 3.2) reveal task-dependent patterns that align with the findings reported in [30, 11]. For some tasks, such as rel-trial site-success, removing the attention to the centroid tokens leads to a substantial performance drop ($-19.08\%$), indicating that the global database-wide context provides crucial information beyond the local neighborhood. However, for certain tasks such as rel-avito user-clicks, removing the global module actually improves performance ($7.79\%$ relative gain), suggesting that for some prediction targets, local information is sufficient, and the global context might introduce noise.
These mixed results highlight the complementary nature of local and global information in relational graphs, with the latter being optional depending on the task. Ablation of other encodings. The remaining ablations in Table 2 reveal mixed results across different components. While removing explicit fine-grained encodings (node type, hop distance, and relative time) degrades performance on some tasks, it improves performance on others. For tasks with specific temporal dependencies (as detailed in the Appendix A.1), our current temporal encodings may inadvertently introduce noise. Similarly, for node type and hop distance encodings, their information might already be partially captured by other model components. Despite these variations, the full RELGT model still shows consistently superior results when averaged across all tasks. However, our findings suggest that RELGT’s performance could be further enhanced by careful tuning of these encoding components based on their task-specific importance. In particular, additional improvements can be achieved by incorporating more effective temporal encoding methods [7, 21, 23]. HGT, a GT baseline, underperforms with significant computational overhead. As shown in Tables 1a and 1b, HGT [20] underperforms compared to the HeteroGNN baseline of RDL [42] across Table 2: Relative drop $( \% )$ in performance in RELGT after removing a model component. Negative scores suggest the component is critical in RELGT, and vice-versa. Full results in Table 7. 
most tasks, with only two exceptions: rel-trial study-adverse and rel-event user-ignore. Notably, the integration of Laplacian eigenvectors as PEs in HGT improves performance in just 5 out of 21 tasks. Moreover, as illustrated in Figure 4, the computational overhead required for precomputing the Laplacian PEs substantially increases per-epoch runtime across various tasks. These empirical findings clearly reveal the difficulties of directly applying existing GT architectures to relational entity graphs, emphasizing the importance and need for our contributions with RELGT. Figure 4: Left: Epoch runtime comparison of HGT [20] and HGT+PE, with Laplacian PE (see Figure 5 for all tasks). The red portion shows the additional time consumed by the precomputation of Laplacian PE against the base HGT time (blue). Right: Ablation for different $K$ values as the local context size in RELGT. Results using $K = 300$ serve as the baseline ($100\%$ performance), with $K = 100$ and $K = 500$ runs measured as a $\%$ of performance relative to $K = 300$. Local context size $K$. In our main RELGT experiments, we set the local context size at 300 nodes (Section 3.1); however, we study its variability in Figure 4 for context sizes $K \in \{100, 300, 500\}$. Although $K = 300$ generally produces the best results, optimal values vary across specific tasks. 
For instance, rel-avito ad-ctr benefits from a larger context size, whereas rel-trial study-outcome achieves better performance with a smaller context window. These findings suggest that RELGT’s performance could be further enhanced by task-specific tuning of the context size, allowing for better model expressivity based on the structural characteristics of each dataset.
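The global module studied in the ablations above (attention from the $K$ local tokens to a small set of learnable centroids, Section 3.2) can be sketched as follows. This is a hypothetical numpy simplification with made-up dimensions, not the paper's implementation; in RELGT the centroids are learned parameters rather than fixed arrays.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_centroid_attention(tokens, centroids):
    """Each of the K local tokens attends to C centroid tokens that
    act as a compressed, database-wide summary. The cost is O(K*C)
    rather than attention over the full graph."""
    scores = tokens @ centroids.T       # (K, C) similarity scores
    attn = softmax(scores, axis=-1)     # each row sums to 1
    return attn @ centroids             # (K, d) global context vectors

rng = np.random.default_rng(0)
K, C, d = 300, 8, 16                    # illustrative sizes
out = global_centroid_attention(rng.standard_normal((K, d)),
                                rng.standard_normal((C, d)))
```

In the full model, this global context would be combined with the local-attention representation before the downstream prediction layers; the task-dependent gains discussed above correspond to how useful this database-wide summary is for a given target.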
Relational Deep Learning (RDL) is a promising approach for building state-of-the-art predictive models on multi-table relational data by representing it as a heterogeneous temporal graph. However, commonly used Graph Neural Network models suffer from fundamental limitations in capturing complex structural patterns and long-range dependencies that are inherent in relational data. While Graph Transformers have emerged as powerful alternatives to GNNs on general graphs, applying them to relational entity graphs presents unique challenges: (i) Traditional positional encodings fail to generalize to massive, heterogeneous graphs; (ii) existing architectures cannot model the temporal dynamics and schema constraints of relational data; (iii) existing tokenization schemes lose critical structural information. Here we introduce the Relational Graph Transformer (RelGT), the first graph transformer architecture designed specifically for relational tables. RelGT employs a novel multi-element tokenization strategy that decomposes each node into five components (features, type, hop distance, time, and local structure), enabling efficient encoding of heterogeneity, temporality, and topology without expensive precomputation. Our architecture combines local attention over sampled subgraphs with global attention to learnable centroids, incorporating both local and database-wide representations. Across 21 tasks from the RelBench benchmark, RelGT consistently matches or outperforms GNN baselines by up to 18%, establishing Graph Transformers as a powerful architecture for Relational Deep Learning.
[ "cs.LG", "cs.AI", "cs.DB" ]
# 1 Introduction The intensive use of chatbots based on Large Language Models (LLMs) has been associated with the promotion of superficial learning habits and a decline in critical thinking skills in their users, particularly students (Gerlich, 2025; Schei et al., 2024). Motivated by this fact, rather than relying on LLMs to provide factual answers, there is an opportunity to leverage the sophisticated natural language understanding capabilities of LLMs to foster critical thinking by means of the generation of critical questions. This paper contributes to the CQs-Gen shared task of the 12th Workshop on Argument Mining, co-located with ACL 2025, which focuses on generating critical questions from debate interventions. In sum, the main contributions of our work are four-fold: (1) We propose a Two-Step Framework for Critical Question Generation composed of a Questioner–Judge LLM architecture where the Questioner, $LLM_Q$, generates multiple candidate questions that are evaluated by the Judge, $LLM_J$, which selects the most relevant ones, improving quality through selection; (2) we perform an extensive empirical evaluation of several small (7B–14B), open-source LLMs, demonstrating their strong performance despite limited size and without fine-tuning; (3) we explore how integrating argumentation scheme theory into prompts, both selectively and systematically, impacts generation quality and diversity; and (4) we highlight the potential of the proposed method to support educational tools that can be deployed locally, preserving privacy and reducing computational costs. Figure 1: Overview of the proposed framework: the Questioner ($LLM_Q$) generates $N$ candidate critical questions from a debate intervention, and the Judge ($LLM_J$) selects the three best. Models considered include Llama 3.1 8B, Gemma 2 9B, Qwen 2.5 7B, Gemma 3 12B, DeepSeek R1 14B, GPT-4o, and BERT. 
Our system ranked first in the CQs-Gen 2025 shared task on critical question generation, validating the effectiveness of the proposed approach. # 2 Problem definition # 2.1 Dataset description The provided dataset (Figueras, 2025), $D_{\mathrm{shared\ train}}$, is composed of: (1) $D = 189$ interventions from real political debates, where each intervention consists of a short text of $138 \pm 88.4$ words on average; (2) their associated argumentative schemes, that is, “stereotypical patterns of inference that capture common types of defeasible arguments, i.e., arguments that are plausible but open to rebuttal. Each scheme represents a form of reasoning with typical premises and a conclusion” (Walton et al., 2008).2 Most ($62.4\%$) of the interventions are associated with a single argumentative scheme, although some may have up to six; (3) a set $\mathcal{R}^j$ consisting of $N^j$ annotated reference questions for each debate intervention $j$, where $j = 1, \ldots, D$. Each reference question $q_i^j$ is labeled with a label or category $l_i^j \in \{$Useful, Unhelpful, Invalid$\}$, such that $\mathcal{R}^j = \{(q_i^j, l_i^j) \mid i = 1, \ldots, N^j\}$. Useful questions can potentially challenge one of the arguments in the text; Unhelpful questions are valid but unlikely to challenge any of the arguments in the text; and Invalid questions cannot be used to challenge any argument in the intervention (Figueras et al., 2025). # 2.2 Task description The task consists of automatically generating three Useful critical questions, $Qc^j = \{qc_1^j, qc_2^j, qc_3^j\}$, for each debate intervention $j$. 
In this context, critical questions are designed to evaluate the strength of an argument by revealing the assumptions underlying its premises (Figueras and Agerri, 2024). The usefulness of each generated critical question $qc_i^j$ is evaluated by measuring its cosine similarity with the annotated reference questions $\mathcal{R}^j$. The label assigned to $qc_i^j$ corresponds to the label of the most similar reference question, provided that the similarity is greater than or equal to 0.6. If no similarity score reaches this threshold, the question is marked as Not able to evaluate; in this case, human evaluators assessed the usefulness of the question during the competition. The final score was computed on the 34 interventions that compose the test set, $D_{\mathrm{shared\ test}}$. Note that the reference test set, with the labels corresponding to the interventions in $D_{\mathrm{shared\ test}}$, was not made available. # 3 Methodology As illustrated in Figure 1, the proposed system consists of two large language models (LLMs) used sequentially: (1) the Questioner ($LLM_Q$), which generates candidate critical questions given an intervention and its associated argumentation schemes; and (2) the Judge ($LLM_J$), which evaluates these candidates and selects those deemed most useful (Li et al., 2024). This architecture is grounded in the framework of critical thinking proposed by Elder and Paul (2020), which comprises analytic, creative, and evaluative dimensions. We operationalize the creative component through $LLM_Q$ (generation), and the analytic and evaluative components through $LLM_J$ (selection). 
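The automatic evaluation described in Section 2.2 can be sketched as follows. This is a minimal illustration with toy 2-dimensional vectors; the shared task computes cosine similarity over sentence embeddings of the question texts, which are assumed to be precomputed here.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def evaluate_question(q_emb, ref_embs, ref_labels, threshold=0.6):
    """Assign the label of the most similar reference question if the
    best cosine similarity reaches the threshold; otherwise the
    question cannot be evaluated automatically."""
    sims = [cosine(q_emb, r) for r in ref_embs]
    best = int(np.argmax(sims))
    if sims[best] >= threshold:
        return ref_labels[best]
    return "Not able to evaluate"

# Toy references: one Useful, one Invalid reference question.
refs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
labels = ["Useful", "Invalid"]
near = evaluate_question(np.array([0.9, 0.1]), refs, labels)   # "Useful"
far = evaluate_question(np.array([-1.0, 0.1]), refs, labels)   # below threshold
```

Questions falling in the "Not able to evaluate" bucket are the ones that were passed to human evaluators during the competition.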
# 3.1 The prompts The prompts provided to the LLMs include: the intervention text, the role of the LLM (i.e., Questioner or Judge), definitions of critical question and argumentation scheme, the argumentative schemes present in the intervention along with their definitions (see A.3) and corresponding question templates (see A.4), the task objective, and the expected output. For more details, see A.2.3. For $LLM_Q$, each prompt is designed to elicit $N$ questions in a single generation step, rather than prompting the model $N$ times for one question at a time. This strategy effectively reduces question repetition. In line with Guo et al. (2023), we hypothesize that candidate questions exhibiting high mutual similarity are likely to be useful. Thus, the following instruction is added to $LLM_J$’s prompt: If some questions are redundant, these questions must be important: select the most relevant one. This modification led to an overall improvement in performance. # 3.2 Experimental design We split $D_{\mathrm{shared\ train}}$ into training ($D_{\mathrm{train}}$, 74), validation ($D_{\mathrm{val}}$, 33), and test ($D_{\mathrm{test}}$, 79) sets. The size of $D_{\mathrm{test}}$ was selected to ensure stable results under the automatic evaluation metric (see A.2.1). We conducted experiments on $D_{\mathrm{test}}$ by varying the following parameters to assess their impact on performance: the choice of LLM for each of the roles (Questioner and Judge), the number of candidate questions generated, and the temperature setting of the LLMs. Additionally, we performed an ablation study to evaluate the role of argumentation schemes in the generation process and to assess the added value of $LLM_J$ by comparing it with alternative question selection strategies. 
For more details on the experimental setup and further experiments, including LLM and BERT fine-tuning and data augmentation, see A.2 and A.4. # 4 Experiments and results # 4.1 Model comparison We evaluated both $LLM_Q$ and $LLM_J$ using a selection of small, open-source LLMs ranging from 7B to 14B parameters: Qwen 2.5 7B (Yang et al., 2024), Llama 3.1 8B (Dubey et al., 2024), Gemma 2 9B (Team et al., 2024), Gemma 3 12B (Team et al., 2025), and DeepSeek R1 14B (Guo et al., 2025). We compare their performance with that of GPT-4o (Achiam et al., 2023). As shown in Table 1, the LLM combination yielding the highest proportion of useful outputs on $D_{\mathrm{test}}$ is Llama 3.1 8B as $LLM_Q$ and Gemma 2 9B as $LLM_J$. Table 1: Performance on $D_{\mathrm{test}}$ for a selection of $LLM_Q$ and $LLM_J$. Use, Inv, and NoEval are the $\%$ of Useful, Invalid, and Not able to evaluate questions, respectively. $LLM_Q$ generates 8 questions, of which $LLM_J$ selects the best 3. The argumentative schemes are not given in the prompt. Best results in bold. # 4.2 Leveraging argumentation schemes To assess the impact on performance of adding argumentation scheme theory to the prompts for both $LLM_Q$ and $LLM_J$, we conducted an ablation study. Table 2 compares the performance of $LLM_Q$ (Llama 3.1 8B, generating six questions) and $LLM_J$ (Gemma 2 9B) in the following configurations: (1) Without: no argumentation scheme is provided; (2) With (one): all argumentation schemes relevant to the given intervention are included in a single prompt; (3) With (mult.): each argumentation scheme is provided in a separate prompt; and (4) Both: $LLM_Q$ is prompted independently using the With (one) and Without setups, and the two sets of candidate questions are then merged for selection by $LLM_J$. 
Similarly to previous work (Figueras and Agerri, 2024), the best performance is achieved in the Both configuration, suggesting that combining scheme-based and non-scheme-based prompts yields the most effective results. Note that $81\%$ of the questions selected by $LLM_J$ were generated with the argumentation scheme in the prompt. Table 2: Performance on $D_{\mathrm{test}}$ with different argumentation scheme setups. $LLM_Q$: Llama 3.1 generating 6 questions. Without: no argumentation scheme is provided; With (one): argumentation schemes are included in a single prompt; With (mult.): each argumentation scheme is provided in a separate prompt; and Both: $LLM_Q$ is prompted independently with and without argumentation schemes. Best results in bold. # 4.3 Number of candidate questions Table 3 presents the effectiveness of the questions as a function of the number of candidate questions generated per prompt. The experiment uses Llama 3.1 8B as $LLM_Q$, prompted both with and without the schemes, and Gemma 2 9B as $LLM_J$. Generating four candidate questions per prompt (eight in total) yielded the best performance. Table 3: Performance on $D_{\mathrm{shared\ train}}$ as a function of the number of candidate questions generated. $LLM_Q$: Llama 3.1, $LLM_J$: Gemma 2. 3 runs. # 4.4 Added value of the Judge, $LLM_J$ Although we observe an improvement in performance when adding $LLM_J$ versus a random selection (see Tables 2 and 3), the results are not directly comparable, as the average usefulness is computed over different numbers of questions ($N$ for $LLM_Q$ and three for $LLM_J$). To further assess the effectiveness of $LLM_J$, we compared it against alternative selection paradigms. Table 4 reports the performance of $LLM_J$ versus selection by an oracle and random selection, using Llama 3.1 8B as $LLM_Q$ with four candidate questions per prompt. 
The oracle selects up to three Useful questions. If fewer than three Useful questions are available, the remaining slots are filled by Unhelpful questions. If still insufficient, Invalid and then Not able to evaluate questions are considered, in that order. The oracle illustrates the upper bound of the Judge’s potential performance. Results show that $LLM_J$ achieves a usefulness rate that is 3.4 percentage points higher than random selection, a statistically significant improvement ($p < 0.05$, McNemar’s test). As expected, the oracle yields the highest usefulness, with a gain of 34.2 percentage points. Table 4: Performance on $D_{\mathrm{shared\ train}}$ depending on the method used to select the questions. Comparison between random selection, selection with Gemma 2 as $LLM_J$, and selection with an oracle. In all cases, $LLM_Q$ is Llama 3.1 generating $4+4$ questions. 3 runs. # 4.5 Final submission Based on the results of the previous experiments, we selected the following setup for our final submission: $LLM_Q$, Llama 3.1 8B, generating four questions without the scheme and four with the scheme, all within a single prompt; $LLM_J$, Gemma 2 9B, selecting the three best questions, used without fine-tuning. For comparison, we maintained the same experimental setup but substituted $LLM_J$ with GPT-4o in our second submission, and in the third submission, GPT-4o was used for both $LLM_Q$ and $LLM_J$ under identical prompting conditions. Table 5 shows the performance with the automated evaluation on $D_{\mathrm{shared\ train}}$ and $D_{\mathrm{shared\ test}}$ for the three final submissions. Table 5: Performance with the automated evaluation on the validation set and the test set for the three final submissions. Bold indicates the winning submission. 
After the manual annotation of the questions by the organizers, the score of the best performing submission rose to 67.6, ranking first in the task.
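The oracle selection procedure described in Section 4.4 amounts to a priority-based fill over the candidate labels; a minimal sketch follows (the label strings match the task definitions; candidate names and counts are illustrative).

```python
def oracle_select(labeled_questions, k=3):
    """Oracle baseline: fill up to k slots preferring Useful, then
    Unhelpful, then Invalid, then 'Not able to evaluate' questions,
    giving an upper bound on what the Judge could achieve."""
    priority = ["Useful", "Unhelpful", "Invalid", "Not able to evaluate"]
    picked = []
    for label in priority:
        for question, q_label in labeled_questions:
            if q_label == label and len(picked) < k:
                picked.append(question)
    return picked

cands = [("q1", "Invalid"), ("q2", "Useful"), ("q3", "Unhelpful"),
         ("q4", "Useful"), ("q5", "Not able to evaluate")]
selected = oracle_select(cands)  # ['q2', 'q4', 'q3']
```

Random selection (the other baseline in Table 4) would instead sample three candidates uniformly, which is why the oracle's 34.2-point gain marks the headroom left for better Judges.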
The widespread adoption of chat interfaces based on Large Language Models (LLMs) raises concerns about promoting superficial learning and undermining the development of critical thinking skills. Instead of relying on LLMs purely for retrieving factual information, this work explores their potential to foster deeper reasoning by generating critical questions that challenge unsupported or vague claims in debate interventions. This study is part of a shared task of the 12th Workshop on Argument Mining, co-located with ACL 2025, focused on automatic critical question generation. We propose a two-step framework involving two small-scale open source language models: a Questioner that generates multiple candidate questions and a Judge that selects the most relevant ones. Our system ranked first in the shared task competition, demonstrating the potential of the proposed LLM-based approach to encourage critical engagement with argumentative texts.
[ "cs.CL", "cs.HC" ]
# 1 Introduction Event-based cameras [4] are being actively explored in traffic monitoring applications due to their unique ability to capture fast-moving objects with low latency, high temporal resolution, and energy efficiency [22]. Unlike conventional frame-based cameras that capture scenes at fixed intervals, event cameras asynchronously record changes in pixel intensity, producing a continuous stream of events, making them well-suited for real-time object detection in complex, high-speed environments such as urban intersections, where timely and accurate perception is critical for tasks like vehicle tracking, pedestrian safety, and adaptive traffic management [7, 3]. Despite these advantages, the development of robust event-based object detection models for traffic scenarios is significantly constrained by the scarcity of annotated real-world event datasets. Labeling event data is particularly challenging, often requiring synchronized recordings from both event and frame-based cameras and subsequent projection of annotations, an effort-intensive and error-prone process. As a result, large-scale training and evaluation of machine learning models for event-based perception remain limited in practice. Fig. 1. Examples of traffic objects captured using event-based vision. (Left, black background) Synthetic events generated by CARLA’s DVS module. (Right, gray background) Real-world events from the eTram dataset [23], recorded using a physical event camera (Prophesee’s EVK 4 HD). To mitigate this challenge, researchers have proposed a number of event camera simulators capable of generating synthetic event data [2, 6, 9, 10, 14–16]. Among them, CARLA [6] stands out as a widely used open-source driving simulator that provides a high-fidelity, controllable environment for traffic scenario modeling. Importantly, CARLA includes a built-in dynamic vision sensor (DVS) module that emulates the output of real event cameras. 
This makes it uniquely suited for our study, as it not only supports complex, multi-agent traffic interactions but also enables event-based data collection in diverse lighting, weather, and traffic conditions. CARLA was chosen for this work because it offers a rare combination of realistic traffic simulation and native support for event camera emulation, making it an ideal platform for evaluating synthetic data in traffic monitoring contexts [1]. Fig. 1 shows examples of traffic objects captured using synthetic event data from CARLA and real-world data from the eTram [23] dataset, both of which are used in this study to systematically assess the sim-to-real gap in event-based object detection. In this study, we evaluate the realism and reliability of CARLA’s DVS-generated event data for object detection tasks in traffic environments. Specifically, we train a recurrent vision transformer (RVT) [8], a state-of-the-art model tailored for event-based vision, exclusively on synthetic data produced by CARLA and evaluate its performance on varying combinations of synthetic and real-world test data. The RVT was selected for its strong performance on sparse spatiotemporal data and its architectural suitability for handling asynchronous event streams, enabling a rigorous test of the data’s generalizability. Through this sim-to-real evaluation, we aim to determine whether CARLA’s synthetic event data can serve as an effective proxy for real-world conditions or whether substantial performance discrepancies persist. To our knowledge, this is the first study to quantify the sim-to-real gap using CARLA’s native DVS module in a traffic object detection setting. We summarize the key contributions of the paper below: – Synthetic Event-Based Vision Dataset for Traffic Monitoring: We offer a dataset captured from a fixed perception setup at multiple intersection scenarios. 
It includes diverse conditions spanning various weather patterns, lighting environments, and traffic densities, along with detailed annotations of traffic objects, enabling realistic benchmarking and training for event-based perception models. – Evaluation of CARLA’s DVS for Synthetic Data Generation: We assess the quality and applicability of CARLA’s synthetic event data in the context of traffic monitoring, specifically focusing on object detection performance. – Sim-to-Real Gap Analysis and Benchmarking: We quantify the domain gap between synthetic and real-world event data by evaluating model performance across both domains. Our findings highlight the limitations of current DVS simulation fidelity in CARLA and establish a baseline for future research on domain adaptation and transfer learning in neuromorphic vision for traffic monitoring. # 2 Related Study This section reviews the foundational principles of event cameras and the event camera simulators developed to mitigate the scarcity of annotated real-world data, and outlines the rationale for selecting CARLA’s DVS as a synthetic data source for traffic object detection in this study. # 2.1 Event Cameras Event cameras are bio-inspired vision sensors that detect per-pixel changes in brightness asynchronously. Unlike conventional frame-based cameras that capture full images at fixed intervals, event cameras emit a continuous stream of events whenever the logarithmic intensity change at a pixel exceeds a threshold. Each event is encoded as a tuple $\langle x, y, t, p \rangle$, representing the pixel coordinates $(x, y)$, timestamp $t$, and polarity $p$, which indicates whether the intensity increased or decreased [4, 7]. Fig. 2. Off-the-shelf event cameras. (a) Prophesee family of event cameras [18] (left to right): Century Arks SilkyEVCam, OpenMV GENX320 Camera Module, IDS uEye XCP, and Lucid Triton2 EVS. 
(b) IniVation family of event cameras [11], (left to right): DVXplorer, DVXplorer Micro, DAVIS346, and DVXplorer Lite. This sensing paradigm offers several advantages over conventional RGB cameras, including high dynamic range, negligible motion blur, microsecond-level temporal resolution, and extremely low power consumption. These characteristics make event cameras particularly well-suited for real-time perception in high-speed, dynamic, and low-light environments. As a result, their adoption is expanding across a wide range of domains, including robotics, autonomous driving, augmented and virtual reality, mobile and wearable devices, the Internet of Things, medical imaging, positioning and navigation systems, 3D scanning and surface profiling, as well as defense and surveillance applications [19, 12]. Commercially available event cameras, such as Prophesee’s Metavision sensors [18] (see Fig. 2(a)) and the iniVation DAVIS series [11] (see Fig. 2(b)), have made the technology more accessible. However, the research community is still in the early stages of adopting event cameras across application domains. A key challenge remains the limited availability of annotated real-world event datasets, which hinders the development and benchmarking of robust event-based perception models. # 2.2 Event Camera Simulators The lack of large-scale, annotated real-world event datasets poses a significant challenge in developing robust event-based vision models. To overcome this, various event camera simulators have been developed to generate synthetic event streams from either rendered 3D scenes or conventional video inputs. These simulators serve as valuable tools for prototyping and benchmarking event-based algorithms across domains such as object detection, tracking, SLAM, and neuromorphic benchmarking. Event camera simulators differ in their underlying methodologies, input data types, and levels of realism. 
Some simulators, like ESIM [21], employ adaptive sampling over rendered 3D scenes, producing accurate event timing based on photometric changes and including inertial and ground-truth data. Others, such as v2e [10], synthesize events from frame-based video using a learned DVS pixel model. Blender-based tools like the DAVIS Simulator [16] and V2CE [24] simulate camera trajectories over 3D environments, providing dense annotations such as depth and camera calibration. Table 1. Overview of Event Camera Simulators Recent simulators like DVS-Voltmeter [15] and PECS [9] incorporate physical and circuit-level noise models, improving realism for downstream learning tasks. Similarly, CARLA’s built-in DVS module [6] (detailed in Section 2.3) enables event generation in photorealistic, interactive driving environments, uniquely supporting traffic-focused studies. A summary of key simulators is presented in Table 1. These tools differ in terms of fidelity, accessibility, and flexibility. As the field advances, a key research direction involves bridging the realism gap between synthetic and real-world events, making simulators increasingly important for pretraining and evaluation in low-data settings. # 2.3 CARLA DVS The CARLA simulator [6] provides native support for event-based vision through its sensor.camera.dvs blueprint, which emulates a dynamic vision sensor. Unlike conventional cameras, a DVS asynchronously captures changes in brightness at each pixel, outputting a stream of events encoded as $e = \langle x, y, t, p \rangle$, where $(x, y)$ is the pixel location, $t$ is the timestamp, and $p$ is the polarity ($+1$ or $-1$). Events are triggered when the change in logarithmic intensity at a pixel exceeds a predefined threshold: $$ L(x, y, t) - L(x, y, t - \delta t) = p \cdot C $$ where $C$ is the contrast threshold. 
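The triggering rule above can be sketched in the frame-differencing style that CARLA uses. This is an illustrative numpy simplification, not CARLA's actual sensor code: it emits at most one event per pixel per frame pair, whereas a real DVS (and more faithful simulators) would emit multiple events, in quantized steps of $C$, for large brightness changes.

```python
import numpy as np

def dvs_events(prev_frame, frame, t, C=0.3, eps=1e-6):
    """Toy DVS emulation: compare the log intensity of two successive
    frames and emit (x, y, t, p) tuples wherever the change reaches
    the contrast threshold C."""
    dL = np.log(frame + eps) - np.log(prev_frame + eps)
    ys, xs = np.nonzero(np.abs(dL) >= C)
    return [(int(x), int(y), t, 1 if dL[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]

prev = np.full((4, 4), 0.5)   # uniform gray frame
curr = prev.copy()
curr[1, 2] = 1.0              # pixel brightens -> positive event
curr[3, 0] = 0.1              # pixel darkens  -> negative event
events = dvs_events(prev, curr, t=1000)
# events == [(2, 1, 1000, 1), (0, 3, 1000, -1)]
```

Static pixels produce no events at all, which is why the dataset pipeline described later only annotates moving actors.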
CARLA’s DVS mimics this behavior by sampling differences between successive frames at high frequency, generating more events under fast motion or dynamic lighting. To approximate the microsecond temporal resolution of real event cameras, the DVS must run at higher frequencies than standard sensors, balancing temporal fidelity and computational cost. The DVS camera in CARLA supports several configurable parameters, including contrast thresholds, refractory periods, and noise models. The event stream can be accessed using Python APIs and visualized as event frames, where positive and negative events are typically rendered in blue and red, respectively (see Fig. 3). While the sensor inherits all RGB camera attributes, it also includes unique DVS-specific settings such as use_log and noise controls. Fig. 3. Event frames (top row) generated by CARLA’s DVS and their corresponding RGB frames (bottom row), shown across different viewpoints with traffic objects. CARLA’s DVS enables scalable, controlled generation of event data across diverse traffic conditions. Its tight integration with a high-fidelity simulation environment makes it particularly valuable for evaluating event-based perception models and studying sim-to-real transfer in traffic monitoring scenarios. # 3 The Synthetic Event-based Traffic Monitoring Dataset To systematically evaluate the sim-to-real gap in event-based traffic object detection, we curated a synthetic dataset using the DVS module in the CARLA simulator. This dataset, termed SeTraM (Synthetic event-based Traffic Monitoring), captures dynamic traffic scenarios across multiple urban intersections from a fixed overhead perspective. To enable fair and meaningful comparisons, we selected the real-world eTram dataset [23], which similarly records event-based traffic activity using a fixed perception setup under comparable environmental conditions. 
This section introduces SeTraM, describing its data generation pipeline, statistics, and the alignment strategy adopted to ensure compatibility with eTram for sim-to-real performance analysis. # 3.1 SeTraM Description The dataset used in this study is organized into seven groups, each comprising a fixed-length combination of synthetic and real-world data. The synthetic portion, derived from the SeTraM dataset, was generated using the DVS module in the CARLA simulator and captures traffic activity from four distinct urban intersections, viewed from a fixed overhead perspective. Each group includes event sequences designed to simulate diverse real-world conditions by varying environmental factors such as lighting and traffic density (see Fig. 4). Fig. 4. Snapshots of the SeTraM dataset showing daytime, nighttime, and annotation instances. SeTraM consists of five daytime and two nighttime sequences, each lasting approximately 80 seconds, for a total of 38 minutes of event data. To emulate realistic traffic dynamics, up to 100 vehicles and 40 pedestrians were deployed in each simulation. All sequences were annotated following the 1MPX format, with normalized bounding boxes and two object classes: pedestrian and vehicle. The dataset was structured to support controlled sim-to-real experiments. All training sets maintain the same total duration but differ in the proportion of synthetic (SeTraM) and real (eTram) data to assess model performance across varying domain mixes. Each training group comprises four sequences (333 seconds total). For validation, a consistent 50/50 split between synthetic and real data was used. The test set includes two variants of eTram data: one entirely from daytime recordings, and another combining both day and night scenes, providing a robust basis for generalization evaluation. # 3.2 Data Generation Pipeline The SeTraM dataset was generated through a structured pipeline designed to replicate realistic urban traffic environments. 
The overall procedure is summarized below (see Fig. 5): Fig. 5. Data generation pipeline for the training sets from the CARLA simulator at four-way intersections and the real-world eTram dataset. – Traffic Simulation: Using CARLA’s Python API, dynamic urban scenarios were constructed with moving vehicles and pedestrians. Four intersection layouts were selected, and traffic behavior was randomized across simulations to ensure diversity. – Sensor Deployment: A fixed spectator was placed at the corner of each intersection, and both RGB and DVS sensors were attached to it with predefined transformations and orientations. Sensor parameters such as resolution (set to $1280 \times 720$), polarity threshold, and day/night conditions were configured as needed. – Data Logging: The simulator was executed in synchronous mode, advancing via world.tick() at each frame. Two queues were initialized to record RGB frames and DVS timestamps, which were later used for synchronization and annotation. – Bounding Box Processing: During each simulation tick, bounding boxes were computed for actors (vehicles and pedestrians) located within a specified radius from the spectator. Only actors with velocity above $0.1~\mathrm{m/s}$ were included in DVS annotations to ensure relevance, as stationary objects do not trigger events. Bounding boxes for RGB frames were stored in text files, and DVS bounding boxes were aligned using a timestamp synchronization mechanism. Specifically, CARLA’s DVS outputs events asynchronously in an array format via listen(), and all events within a frame’s delta time interval were grouped and indexed using the starting timestamp. – Data Storage and Formatting: DVS event streams were initially saved as .csv files and subsequently converted to .npy and .h5 formats for compatibility with the RVT model. To ensure consistency with the eTram dataset and the RVT pipeline, all annotations and event data were structured according to the 1MPX [17] format. 
Bounding boxes followed a 30 Hz appearance rate, with annotations consisting of seven fields: timestamp, x, y, w, h, class_id, and class_confidence [17]. The entire data generation process was controlled using a custom script, generate_traffic.py, which allowed users to define simulation parameters such as duration, intersection layout, and number of actors. Upon completion, RGB frames and two CSV files (output.csv for events and bbox.csv for bounding boxes) were generated. These files were then processed using a custom conversion script, rvtdataset.py, to yield 1MPX-formatted .npy and .h5 files. Finally, RVT’s preprocessing script was applied to convert these files into the format required for model training. The modularity of this pipeline allows for flexible adaptation to other data formats or perception tasks.

Table 2. Training Set Composition with Varying Proportions of eTraM and SeTraM Data

# 3.3 Quantification of Data from Different Sources

To ensure a fair comparison between synthetic and real-world data, all quantification in this study is based on temporal duration rather than the number of events or bounding boxes. Since RVT processes event data within fixed timestamp intervals, aligning the datasets by time rather than volume avoids discrepancies caused by varying event densities across sources. This time-based standardization ensures consistency in model input and eliminates potential bias from data sparsity or density. The quantification methodology is described below (see Fig. 5 and Table 2):

– Data Instance Duration: Each intersection instance in SeTraM consists of 2500 refresh cycles. With CARLA’s synchronous mode operating at a refresh interval of 0.0333 seconds, each instance yields approximately 83 seconds of event data.
– Grouping of Instances: A total of seven groups were constructed: five daytime groups and two nighttime groups.
Each group comprises four instances (one from each of four intersections), resulting in a total length of 333 seconds per group. These groups are labeled as Day 1–5 and Night 1–2.
– eTraM Data Alignment: The original eTraM recordings vary in duration, ranging from a few seconds to three minutes. To align with the SeTraM structure, eTraM sequences were re-segmented into seven groups of 333 seconds, maintaining the same distribution of five daytime and two nighttime groups. These are referred to as eTraM-Day 1–5 and eTraM-Night 1–2.
– Construction of Mixed Datasets: The initial dataset (Dataset #1) was composed entirely of SeTraM data. To study the impact of real data inclusion, eTraM sequences were incrementally substituted into the training set in 1/7 (14.3%) steps. This resulted in seven training sets with increasing real-data proportions, ranging from 0% to 85.7%. The validation and test sets remained fixed across all experiments, with 52.9% of the data coming from eTraM (360 seconds for validation, 180 seconds for testing) and 47.1% from SeTraM (320 seconds for validation, 160 seconds for testing). This ensured that the only changing variable was the composition of the training data.

Table 3. Validation performance across models trained with varying proportions of real-world data.

Table 4. Test performance on mixed and night-only datasets using mAP and AP@50.

# 4 Experiments

To isolate the effect of real-world data proportions on model performance, all experimental settings were held constant across the training runs. The validation and test datasets remained identical across all experiments, ensuring that only the real/synthetic composition of the training data influenced the outcomes. Each model was trained and evaluated under the same computational environment and parameter configurations.
# 4.1 Experimental Setup

A total of seven models were trained using datasets #1 through #7 (refer to Table 2), each corresponding to a different ratio of real (eTraM) and synthetic (SeTraM) event data. All models were based on the RVT-Small architecture and trained using two A100 GPUs. The training configuration included a batch size of 5 per GPU, 6 data loading workers for training, and 2 for evaluation. The learning rate was fixed at 0.0002236, and training was conducted for a maximum of 400K steps.

# 4.2 Results and Analysis

We evaluated model performance using three metrics: Average Precision at IoU thresholds of 50% (AP@50) and 75% (AP@75), and the overall mean Average Precision (mAP).

Fig. 6. Validation mAP across models with increasing real-world data proportions.

As shown in Table 3, models trained with a balanced mix of synthetic and real data (e.g., Datasets #3 to #6) performed significantly better than models trained with either purely synthetic (Dataset #1) or predominantly real (Dataset #7) data. Interestingly, while performance initially improved with increasing real data, it dropped sharply for Dataset #7, suggesting overfitting or a mismatch when the training data becomes heavily skewed toward real-world samples.

Fig. 7. Validation loss curves across different training configurations.

Fig. 6 shows the performance peak around Dataset #5, after which the validation mAP declines. Figs. 7 and 8 provide insights into model convergence and stability.

Fig. 8. Training loss vs. epochs for each dataset configuration.

Fig. 9. Test mAP across models with increasing real-world data proportions.

Training loss steadily decreased across all runs, indicating effective optimization, while validation loss fluctuated more substantially, likely due to domain shifts between synthetic and real distributions. The validation set contains 52.9% real and 47.1% synthetic data.
Models trained with a similar distribution (Datasets #4 to #6) achieved higher validation performance, suggesting better generalization when domain alignment is preserved. However, when evaluated on fully real test data, the relationship between performance and the fraction of real training data becomes more linear, as shown in Fig. 9. Table 4 and Fig. 9 show that model performance increases linearly with more real data on both fully real test sets. Interestingly, performance on the mixed (day + night) test set is generally higher than on the night-only test set, suggesting a domain mismatch or a lack of nighttime diversity in training. This indicates that while increasing real-world data can improve generalization, domain-specific variability (e.g., lighting conditions) still poses a challenge for sim-to-real transfer. Fig. 10 shows a qualitative visualization of traffic object detections.

Fig. 10. Qualitative visualization of predictions for models 1 to 7. Ground-truth labels are shown in the bottom row, and model predictions are displayed in the top row.

# 5 Discussion

The experimental results reveal a nearly linear improvement in model performance as the proportion of real-world data in the training set increases. This trend indicates that real data offers consistent value in training robust object detection models, reinforcing the importance of authentic event streams for neuromorphic vision. Despite leveraging high-fidelity synthetic data from CARLA’s DVS, models trained solely on synthetic inputs perform poorly when evaluated on real data. Conversely, performance improves markedly when real-world data is introduced, even in small increments, highlighting a significant domain gap. This gap is further evidenced by the instability in validation loss, suggesting poor generalization due to mismatched data distributions. The validation curve shows an average slope of 0.115 mAP per unit increase in real data proportion.
Although this implies that synthetic data can capture some task-relevant structure, it also reveals a ceiling: models trained entirely on synthetic data consistently fall short by a sizable margin. Even with state-of-the-art models like RVT-base, maximum performance remains limited, with synthetic-only training unable to exceed a modest baseline. It is important to note that this sim-to-real gap may vary across tasks and architectures. While object detection is highly sensitive to domain shifts, other tasks, such as trajectory prediction or scene segmentation, may tolerate synthetic data better. Additionally, models equipped with domain adaptation techniques may help close the gap more effectively. Evaluating this gap quantitatively requires dedicated sim-to-real gap metrics, such as the event quality score [5]. While CARLA-generated synthetic data (SeTraM) is useful for controlled experimentation, it cannot yet replace real-world data in performance-critical event-based object detection. Improving simulation realism and leveraging cross-domain learning remain essential for effective sim-to-real transfer.
Event cameras are gaining traction in traffic monitoring applications due to their low latency, high temporal resolution, and energy efficiency, which makes them well-suited for real-time object detection at traffic intersections. However, the development of robust event-based detection models is hindered by the limited availability of annotated real-world datasets. To address this, several simulation tools have been developed to generate synthetic event data. Among these, the CARLA driving simulator includes a built-in dynamic vision sensor (DVS) module that emulates event camera output. Despite its potential, the sim-to-real gap for event-based object detection remains insufficiently studied. In this work, we present a systematic evaluation of this gap by training a recurrent vision transformer model exclusively on synthetic data generated using CARLA’s DVS and testing it on varying combinations of synthetic and real-world event streams. Our experiments show that models trained solely on synthetic data perform well on synthetic-heavy test sets but suffer significant performance degradation as the proportion of real-world data increases. In contrast, models trained on real-world data demonstrate stronger generalization across domains. This study offers the first quantifiable analysis of the sim-to-real gap in event-based object detection using CARLA’s DVS. Our findings highlight limitations in current DVS simulation fidelity and underscore the need for improved domain adaptation techniques in neuromorphic vision for traffic monitoring.
[ "cs.CV" ]
# 1 Introduction

Generative AI techniques have been proposed for various aspects of coding, ranging from coding assistants [1] to optimisation [2] and vulnerability detection [3], for which promising results are being reported. Indeed, in many cases traditional forms of code verification (be it at compile/development time [4] or runtime [5]) still outperform generative AI-based techniques, yet such tools are often rigid and less flexible than generative AI techniques. Given potential future advancements of generative AI, and given the flexible interfaces through which tools can interact with generative AI systems, it is useful to ask: ‘how good are generative AI techniques at undertaking such tasks?’ Indeed, extensive work in the domain has already addressed this question, much of it focused on state-of-the-art large language models. Whilst it may be reasonable to make use of commercially/publicly available LLMs operated by a service provider, these raise issues of privacy and confidentiality, as some entities may prefer not to disclose certain intellectual property (e.g. smart contract code). For this reason, we propose the use of small and resource-constrained language models for vulnerability detection that can execute on an individual’s computer. In this paper we evaluate small language models in a particular niche area, namely Solidity smart contracts, specifically for the detection of a specific type of vulnerability: the ‘reentrancy bug’. The questions this paper aims to shed light on follow: 1. Can small open-source language models (1-3B parameters) be fine-tuned to effectively detect reentrancy vulnerabilities in Solidity smart contracts? 2.
How do different model architectures (specifically LLaMA 3B and Qwen2.5-Coder 3B) compare in their ability to adapt to the specialized task of reentrancy detection through parameter-efficient fine-tuning? 3. Can synthetic data generation techniques produce training examples that enable effective model adaptation despite the scarcity of real-world vulnerability examples? The remainder of this paper is structured as follows. In Section 2 we describe the curated dataset, and in Section 3 delve into details pertaining to the small language models. We then provide evaluation details of the fine-tuned small language models in Section 4, and provide a comparison relative to state-of-the-art large language models in Section 5. We conclude in Section 6.

# 2 Dataset Composition

In this section, we detail the methodology adopted for generating the training and test datasets, highlighting the reasoning underpinning the selected class distributions. To develop an effective vulnerability detection model, comprehensive training and test datasets were constructed with careful attention to class distributions. The intrinsic scarcity of vulnerable contracts within production environments naturally leads to highly imbalanced datasets when gathering real-world samples. This imbalance can result in models achieving deceptively high accuracy simply by always predicting the predominant (secure) class. Custom-crafted balanced datasets mitigate this issue by guaranteeing equal representation across vulnerability classes during the model’s training phase. The training dataset developed specifically for this study consisted of 8,000 Solidity smart contracts, carefully balanced between 4,000 contracts exhibiting reentrancy vulnerabilities and 4,000 secure contracts without such vulnerabilities.
Of the vulnerable contracts, 7.5% (300) were sourced from the Reentrancy Study Dataset [6] and manually modernised through a process described in subsequent sections, whilst the remaining 92.5% (3,700) were systematically synthesised through controlled generation methods. Likewise, within the secure subset, 10% (400) originated from verified secure examples within the Reentrancy Study Dataset, and the remaining 90% (3,600) were synthesised using template-based approaches implementing various security patterns. The predominantly synthetic nature of the dataset was necessitated by the notable scarcity of well-documented, sophisticated and contemporary instances of reentrancy vulnerabilities available in public repositories. The test dataset—used initially to evaluate baseline model performance and subsequently to benchmark the trained model—comprised 120 Solidity smart contracts, constructed using a stratified sampling procedure to ensure representative coverage of both vulnerability-free and vulnerability-containing instances. This holdout dataset, representing approximately 1.5% of the overall corpus, preserved a near-balanced class distribution, with 47.5% (57 contracts) containing reentrancy vulnerabilities and 52.5% (63 contracts) deemed secure. This distribution was a deliberate design decision aimed at reducing evaluation bias while retaining alignment with real-world vulnerability prevalence trends. The composition of the test set follows a hybrid approach to evaluation data curation. Of the 57 contracts (47.5%) containing reentrancy vulnerabilities, 44 were sourced from the Reentrancy Study Dataset [6], while the remaining 13 represented documented exploits observed in production environments, drawn from Caversaccio’s curated repository1.
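The composition figures above can be tallied in a few lines. This is purely a bookkeeping sketch of the counts reported in the text, not part of the authors' pipeline:

```python
# Reported dataset composition (counts taken from the text).
train = {
    "vulnerable_real": 300,        # modernised Reentrancy Study Dataset contracts
    "vulnerable_synthetic": 3_700, # template-generated
    "secure_real": 400,
    "secure_synthetic": 3_600,
}
test_set = {"vulnerable": 57, "secure": 63}

vulnerable = train["vulnerable_real"] + train["vulnerable_synthetic"]  # 4,000
secure = train["secure_real"] + train["secure_synthetic"]              # 4,000
corpus = sum(train.values()) + sum(test_set.values())                  # 8,120

assert vulnerable == secure == 4_000
assert train["vulnerable_real"] / vulnerable == 0.075      # 7.5% real among vulnerable
assert train["secure_real"] / secure == 0.1                # 10% real among secure
assert round(sum(test_set.values()) / corpus, 3) == 0.015  # holdout ≈ 1.5%
```

The assertions confirm the class balance and the roughly 1.5% holdout fraction stated above.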
To address the limitations of scarce labeled data, outdated Solidity versions, and issues arising from language changes, this research adopted a multi-faceted approach to data collection and refinement, integrating source code repository extraction, expert annotation, and systematic preprocessing. The 4,000 vulnerable contracts used for training were curated through a stratified sampling methodology: 300 contracts (7.5%) were sourced from the Reentrancy Study Dataset (as discussed) while the remaining 92.5% were synthetically generated by implementing parameterised vulnerability patterns. Notably, the 300 contracts drawn from the Reentrancy Study Dataset required substantial modification prior to inclusion in the training set due to their reliance on legacy Solidity versions (primarily 0.4.x and 0.5.x) that made use of deprecated external call mechanisms such as transfer() and send(), and lacked the explicit overflow protection features introduced in version 0.8.0. The updating process involved manually adapting the contracts to align with modern Solidity standards (version 0.8.0 and above), ensuring that the original vulnerabilities were preserved while rendering the code reflective of contemporary development practices. This transformation included replacing deprecated constructs (e.g., substituting transfer() with call{value: ...}("")), introducing explicit variable visibility modifiers, and revising arithmetic operations to account for the integrated overflow checks present in newer Solidity versions. The modernisation effort was carried out through a combination of manual contract-level review and programmatic transformation techniques.
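A minimal sketch of one such programmatic transformation is given below. The regex-based helper modernise_transfer is a hypothetical illustration of the transfer()-to-call{value: ...} rewrite, not the authors' actual tooling, which combined such transformations with manual review:

```python
import re

def modernise_transfer(source: str) -> str:
    """Rewrite deprecated `<addr>.transfer(<amount>);` calls into the checked
    low-level call style favoured from Solidity 0.8.x onwards.
    Illustrative only: a robust tool would operate on the AST, not regexes."""
    pattern = re.compile(r"(\w+(?:\.\w+)*)\.transfer\(([^;]+)\);")
    replacement = (
        r'(bool success,) = \1.call{value: \2}("");' "\n"
        r'require(success, "Transfer failed");'
    )
    return pattern.sub(replacement, source)
```

For example, `modernise_transfer("msg.sender.transfer(amount);")` yields the call{value:}-style transfer followed by a success check.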
This modernisation process also involved incorporating explicit visibility modifiers for variables and functions, as well as adapting arithmetic operations to leverage the built-in overflow protection introduced in Solidity 0.8.0—thereby ensuring consistency with modern security conventions. The same modernisation protocol was applied to the 400 secure contracts sourced from the Reentrancy Study Dataset, bringing the total number of incorporated contracts from this dataset to 700 (300 vulnerable and 400 secure), collectively contributing approximately 8.75% of the overall training corpus. While quantitatively modest, this subset played a critical role in anchoring the dataset in empirically validated vulnerability instances and informing the parameterisation of synthetically generated samples. The comprehensiveness of the Reentrancy Study Dataset—achieved through a hybrid labeling methodology combining static analysis, dynamic execution, and expert verification—provided a robust foundation for the development of reliable vulnerability detection models. Temporal disparities, however, posed a challenge: the smart contracts were collected between 2015 and 2022, with over 63% predating Solidity 0.8.0. As a result, a systematic modernisation method was followed to ensure consistency with contemporary language standards.

# 2.1 Synthetic Data Generation

We now present the methodology employed for generating synthetic smart contract data, with the aim of producing diverse, representative, and structurally valid samples to support robust model training. Given the limited availability of real-world examples of reentrancy vulnerabilities—with only 147 documented exploits identified in the comprehensive repository curated by Caversaccio2, and merely 13 exhibiting sufficiently isolated vulnerability patterns suitable for training—synthetic data generation became a foundational pillar of the dataset construction strategy.
This approach is consistent with methodologies advocated by Godefroid et al. [7] and Hellendoorn et al. [8], who proposed synthetic generation as a viable means to mitigate data scarcity in program analysis domains. Even the extraction and preparation of this limited subset demanded considerable effort, as vulnerability-containing contracts often required extensive disentanglement from surrounding contract ecosystems to isolate the vulnerable components. The dataset size of 8,000 contracts (4,000 vulnerable and 4,000 non-vulnerable) was determined based on empirical evidence concerning the relationship between dataset scale and model performance in specialised classification tasks. The synthetic data generation methodology employed multiple techniques to ensure both diversity and representativeness. We adopted a template-based generation strategy with controlled parameterisation, maintaining consistency in fundamental vulnerability patterns while introducing substantial variation in surface-level features. To address the class imbalance characteristic of real-world vulnerability distributions, we employed strategic oversampling techniques, including SMOTE (Synthetic Minority Over-sampling Technique), originally proposed by Chawla et al. [9]. For smaller language models (1-3B parameters), a dataset size of 8,000 examples constitutes an appropriate scaling factor.

# 2.2 Pattern-based Generation

The first generation technique yielded 2,800 vulnerable contracts exhibiting basic reentrancy patterns, generated through controlled parameterisation of fundamental vulnerability templates. This method focused on producing variants of elementary reentrancy vulnerabilities by systematically randomising variable names, function structures, and control flow constructs, while preserving the underlying vulnerability semantics.
The implementation incorporated semantic-preserving transformations, ensuring that the generated contracts retained essential vulnerability characteristics while introducing surface-level diversity necessary to mitigate overfitting to superficial code patterns.

# 2.3 Advanced Vulnerable Contracts

The second generation technique produced an additional 900 vulnerable contracts, each implementing more sophisticated vulnerability patterns through a taxonomy-guided generative framework. This methodology adopted a systematic approach to generating contracts across four distinct reentrancy vulnerability types: single-function reentrancy, cross-function reentrancy, cross-contract reentrancy, and read-only reentrancy. The implementation incorporated randomised naming for functions and contracts to prevent the model from learning spurious textual cues, while preserving the structural features that define each vulnerability subtype. To address potential class imbalance among more complex reentrancy variants, this technique leveraged SMOTE to ensure an even distribution across vulnerability subtypes. An illustrative example of the parameterised template-based generation process for the single-function reentrancy subtype is provided in Figure 1.

Fig. 1 Single-Function Reentrancy Vulnerability Template

```python
def generate_solidity_contract(vuln_type):
    contract_name = "VulnContract" + str(random.randint(1000, 9999))
    function_name = generate_random_function_name()
    if vuln_type == "single_function_reentrancy":
        contract_code = f"""
        pragma solidity ^0.8.0;

        contract {contract_name} {{
            mapping(address => uint256) public balances;

            function deposit() public payable {{
                balances[msg.sender] += msg.value;
            }}

            function {function_name}() public {{
                require(balances[msg.sender] > 0, "Insufficient balance");
                (bool success,) = msg.sender.call{{value: balances[msg.sender]}}("");
                require(success, "Transfer failed");
                balances[msg.sender] = 0;
            }}
        }}
        """
```

As illustrated in Figure 1, the template captures the core reentrancy vulnerability pattern — the execution of an external call prior to the corresponding state update, which enables the reentrant behaviour. The parameterised components, such as the contract and function names, introduce surface-level variability while preserving the semantic structure of the vulnerability.

# 2.4 Vulnerability-free Contracts

For the generation of vulnerability-free contracts, two complementary techniques were employed. The first vulnerability-free contract generation technique yielded 2,800 contracts that implemented various security patterns specifically designed to mitigate reentrancy vulnerabilities. This methodology utilised multiple templates incorporating best practices such as the Checks-Effects-Interactions pattern, ReentrancyGuard implementations3, pull-payment mechanisms and mutex locks. Figure 2 illustrates one of the template categories employed, specifically demonstrating the implementation of the ReentrancyGuard pattern as defined in the OpenZeppelin library.
```python
contract_templates = [
    """
    pragma solidity ^0.8.19;

    import "@openzeppelin/contracts/security/ReentrancyGuard.sol";

    contract SecureFund{0} is ReentrancyGuard {{
        mapping(address => uint256) private balances;

        function deposit() external payable {{
            require(msg.value > 0, "Must send ETH");
            balances[msg.sender] += msg.value;
        }}

        function withdraw(uint256 _amount) external nonReentrant {{
            require(balances[msg.sender] >= _amount, "Insufficient balance");
            balances[msg.sender] -= _amount;
            payable(msg.sender).transfer(_amount);
        }}
    }}
    """,
]
```

In Figure 2, the critical security element is the nonReentrant modifier from OpenZeppelin’s ReentrancyGuard, which enforces a mutex mechanism to prevent reentrant calls. This pattern exemplifies one of four security strategies systematically incorporated into the generated contracts.

# 2.5 Advanced Secure Contracts

The second secure contract generation technique yielded an additional 800 contracts with more sophisticated security implementations. This methodology focused on constructing contracts exhibiting “deceptive complexity” — i.e. contracts which may appear superficially vulnerable to static analysis tools but internally incorporate layered security mechanisms to defend against reentrancy attacks. These contracts implemented multiple security modifiers, including custom non-reentrancy locks, block execution limits, gas-based execution guards, timestamp throttling, and secure delegation checks. Diversity in security implementation techniques was essential for training the model to recognise a broad spectrum of secure coding patterns, rather than relying on simplistic indicators of vulnerability absence. Without exposure to varied and realistic security architectures, the model risks developing oversimplified heuristics for vulnerability detection.
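Across the generation techniques above, the surface-level randomisation of contract and function names can be sketched as follows. This is a hypothetical helper; the placeholder markers and naming scheme are illustrative assumptions, not the authors' generator:

```python
import random

def randomise_identifiers(template: str) -> str:
    """Inject randomised contract/function names into a template while leaving
    the vulnerability or security semantics untouched (illustrative sketch)."""
    suffix = random.randint(1000, 9999)
    fn = "fn_" + "".join(random.choices("abcdefghij", k=6))
    return (template
            .replace("{contract_name}", f"Contract{suffix}")
            .replace("{function_name}", fn))
```

Because only identifiers change, every generated variant keeps the same external-call-before-state-update (or guarded) structure, which is what the model is meant to learn.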
# 2.6 Integration of Real-World Vulnerabilities

The integration of real-world vulnerability instances into the testing dataset is critical for ensuring a realistic evaluation of the model’s detection capabilities. In contrast to purely synthetic datasets — which may overlook the nuanced characteristics of practical exploits — real-world vulnerabilities serve as authentic, adversarial examples that more accurately reflect deployment conditions and challenge the model’s robustness in realistic scenarios. Building upon the method outlined above, the testing dataset used for the evaluation of both baseline and trained models was strategically enhanced through the deliberate inclusion of empirically documented real-world reentrancy vulnerabilities. Of the 120 smart contracts comprising our testing dataset, 13 were directly derived from empirically verified reentrancy exploits observed in production blockchain environments, several of which were recorded as recently as late 2024. The inclusion of recent exploitation incidents, such as the Peapods Finance attack, The Smoofs attack and the Sumer Money attack, ensures that our testing framework evaluates model performance against contemporary attack methodologies that have demonstrably bypassed existing security protocols.

# 2.7 Reentrancy Variants

Our dataset design ensures comprehensive evaluation across reentrancy variants, including instances of four principal reentrancy types: Single-Function Reentrancy, Cross-Function Reentrancy, Cross-Contract Reentrancy and Read-Only Reentrancy.

# 2.8 Test Dataset Construction

The construction of the test dataset adhered to a meticulous multi-phase qualification process designed to ensure accuracy, diversity and compatibility (with Solidity version 0.8.0).

# 1. Static Analysis Verification

Candidate contracts underwent analysis using the static analysis tool Slither to establish their classification as either vulnerable or secure.
To ensure confidence in the results, contracts with conflicting analysis outcomes were either excluded or subjected to further manual review to resolve ambiguities.

# 2. Syntactic Modernisation

Contracts originating from the Reentrancy Study Dataset were modernised to ensure compatibility with Solidity 0.8.0. Key updates included updating pragma directives, replacing deprecated functions, adding explicit visibility modifiers and refactoring arithmetic operations to align with Solidity’s integrated overflow checks.

# 3. Structural Diversity Assurance

The test set was carefully curated to encompass different reentrancy vulnerabilities, including single-function reentrancy, cross-function reentrancy, cross-contract reentrancy and read-only reentrancy.

# 4. Additional Checks

All selected contracts underwent cross-validation using OpenAI language models. This step leveraged advanced contextual understanding to mitigate potential oversights from earlier phases.

# 5. Manual Review and Checks

Finally, a manual review was undertaken to double-check accurate classification as vulnerable or non-vulnerable.

# 2.9 Modernization of Reentrancy Study Dataset Contracts

Although the Reentrancy Study Dataset formed the foundation of our test set, as discussed, substantial improvements were made to enhance its relevance and robustness, including replacing deprecated Solidity functions, implementing explicit overflow/underflow protection mechanisms, and refactoring control flow structures to comply with Solidity 0.8.x standards. To address limitations in prior datasets, we implemented several key improvements: synthesised contracts were added, and the sample size of read-only reentrancy examples was increased.

# 3 Model Selection & Fine-Tuning Strategy of Models: LLaMA 3B, Qwen2.5-Coder 3B

In this section we discuss why we selected LLaMA 3B and Qwen2.5-Coder 3B as the smaller models to evaluate for reentrancy detection in this first study.
# 3.1 LLaMA 3.2 3B Model Architecture

The LLaMA 3.2 3B model represents a significant advancement in the deployment of transformative AI capabilities for relatively resource-constrained environments. This model serves as a compact yet capable member of the LLaMA family, engineered specifically for scenarios with limited computational resources while maintaining strong language understanding performance4. LLaMA 3.2 3B utilises a decoder-only transformer architecture comprising approximately 3 billion parameters. The model integrates advanced attention mechanisms and has undergone extensive pre-training on diverse corpora, including both general natural language and code. Building upon the foundational advances of earlier LLaMA iterations, it incorporates architectural refinements that improve its capacity to process and reason about structured text, such as programming languages. We selected the LLaMA 3.2 3B model as one of the models to evaluate based on the following key factors:

• Capability vs. Efficiency: The 3B scale balances complex pattern understanding with deployability on consumer hardware.
• Instruction-following: Pre-training on instruction datasets enables strong zero-shot performance in specialized tasks.
• Quantization Suitability: The architecture maintains performance under quantization, significantly reducing memory demands.

# 3.2 Qwen2.5-Coder-3B Architecture

Qwen2.5-Coder is a specialised foundation model optimised for code understanding and generation, rendering it particularly well-suited for security-related analysis tasks; it has undergone extensive pre-training on 5.5 trillion tokens, with a substantial portion of the corpus dedicated to diverse programming languages [10]. Qwen2.5-Coder-3B integrates key enhancements over general-purpose models and was selected as another model to investigate based on the following:

• Code-specific attention: Optimised attention mechanisms tailored to the syntactic and semantic structure of programming languages.
• Enhanced tokenization: Utilises a tokeniser designed to preserve meaningful code constructs, improving parsing and comprehension. • Instruction-following: Fine-tuned to follow complex code analysis directives, enabling effective handling of specialised security tasks. # 3.3 Quantization Implementation The implementation employs Unsloth’s dynamic 4-bit quantization, representing a significant advancement over traditional quantization techniques. Whereas conventional 4-bit quantization frequently results in unacceptable accuracy degradation, Unsloth’s approach mitigates this by selectively excluding parameters from quantization based on their sensitivity to precision loss. # 3.4 Parameter-Efficient Fine-Tuning with LoRA Low-Rank Adaptation (LoRA) is employed as the primary fine-tuning method, enabling parameter-efficient adaptation by introducing trainable low-rank matrices into the transformer architecture while keeping the pre-trained weights frozen. This approach significantly reduces memory consumption, as gradients and optimizer states are computed solely for the LoRA parameters. The fine-tuning process focuses on adapting the attention mechanisms and output projection layers for the vulnerability detection task, leveraging the pretrained knowledge embedded within the model. This targeted adaptation enables the model to associate semantic code patterns with security implications, without requiring extensive retraining. To further optimise memory usage, Unsloth’s gradient checkpointing is applied, which reduces memory overhead by recomputing intermediate activations during backpropagation, trading increased computation for reduced memory consumption. # 4 Evaluation and Validation In this section we evaluate the fine-tuned language models discussed in Section 3. 
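To illustrate why LoRA cuts memory so sharply, the trainable-parameter count for a low-rank update $W + BA$ can be computed directly. A minimal sketch; the 4096-dimension projection layer and rank 16 below are illustrative values, not the exact configuration used in this work:

```python
def full_params(d_in: int, d_out: int) -> int:
    # Full fine-tuning trains every entry of the d_out x d_in weight matrix W.
    return d_in * d_out

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    # LoRA freezes W and trains only B (d_out x r) and A (r x d_in).
    return d_out * rank + rank * d_in

d = 4096   # hypothetical hidden size of one attention projection
r = 16     # hypothetical LoRA rank
full = full_params(d, d)
lora = lora_params(d, d, r)
print(full, lora, f"{100 * lora / full:.2f}%")
```

For these illustrative values the LoRA update trains 131,072 parameters against 16,777,216 for the full matrix, i.e. under 1% of the weights, which is why gradients and optimizer states become so much cheaper to store.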
The evaluation employed a comprehensive set of performance metrics to assess model effectiveness across multiple dimensions, including accuracy, precision, recall, F1-scores and confusion matrices. The various metrics were computed with scikit-learn [11]. # 4.1 Base Model Performance Before the parameter-efficient fine-tuning regimen described in Section 3.4 was evaluated, the unaltered foundational models were tested and yielded poor performance, as provided in Table 1. This baseline demonstrates the importance of domain-specific adaptation for such niche areas. Table 1 Base Model Performance Metrics The baseline results corroborate observations made by Rabin et al. [12], who documented substantial performance shortfalls when general-purpose code language models are deployed for specialised analysis without targeted adaptation. Notably, the Qwen 2.5 Coder 3B model exhibited marginally inferior baseline performance to the more general LLaMA 3B model, indicating that broad code-comprehension aptitude does not necessarily confer proficiency in niche areas like Solidity vulnerability detection. # 4.2 Fine-tuned Model Performance We now examine the performance achieved by the fine-tuned models in Sections 4.2.1 and 4.2.2. # 4.2.1 Evaluating the Fine-tuned LLaMA 3B Model By employing the LoRA-based parameter-efficient fine-tuning protocol, together with the synthetic-data augmentation workflow detailed in the above sections, the LLaMA 3B model’s performance increased to 67% test accuracy — a 19-percentage-point gain over the baseline. The result is particularly notable given the constrained computational requirements imposed, indicating effective transfer of pretrained knowledge into the specialised, niche domain. Granular performance indicators are provided in Table 3. The model achieves reasonable precision (0.79) for vulnerable-class predictions — which is a valuable characteristic for security analysis, where false positives can be costly. 
The results achieved are comparable with many traditional approaches to vulnerability detection, though still not as good as some traditional methods. The model exhibits ambiguity in 28 of the 120 contracts — a phenomenon recognised in language-model applications to intricate technical tasks. Table 2 Confusion matrix for fine-tuned LLaMA 3B model Table 3 Fine-tuned LLaMA 3B Performance Metrics # 4.2.2 Evaluating the Fine-tuned Qwen2.5Coder 3B Model The fine-tuned Qwen2.5Coder 3B model attained 59% accuracy — an uplift of 14 percentage points relative to the baseline, yet it still lagged behind the fine-tuned LLaMA 3B model. Table 4 Confusion matrix for fine-tuned Qwen2.5Coder 3B Model Inspection of the confusion matrix in Table 4 shows that the fine-tuned Qwen2.5Coder 3B offers a comparatively less accurate classification profile than the fine-tuned LLaMA 3B, correctly identifying 51% of non-vulnerable contracts and 68% of vulnerable ones. Table 5 indicates that the fine-tuned Qwen2.5Coder 3B attains a recall of 0.68 on the vulnerable class — substantially surpassing the 0.37 achieved by the fine-tuned LLaMA 3B model, thus exhibiting greater sensitivity to vulnerability cues at the expense of precision. Such a recall-oriented profile is desirable in scenarios where minimising false negatives outweighs concerns over false positives, for example during an initial triage phase in which flagged contracts are subsequently subjected to expert review. 
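The headline figures above follow mechanically from a confusion matrix, with "vulnerable" as the positive class. A minimal sketch of the metric definitions, using illustrative counts chosen to mirror the reported 51%/68% class-level rates on a hypothetical balanced 200-contract split (not the paper's actual matrix):

```python
def metrics(tp: int, fp: int, tn: int, fn: int):
    # Standard binary-classification metrics, positive class = "vulnerable".
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)        # sensitivity on the vulnerable class
    specificity = tn / (tn + fp)   # correct rate on the non-vulnerable class
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, specificity, accuracy, f1

# Illustrative counts only: 100 vulnerable and 100 non-vulnerable contracts.
p, r, s, a, f1 = metrics(tp=68, fp=49, tn=51, fn=32)
print(f"precision={p:.2f} recall={r:.2f} specificity={s:.2f} accuracy={a:.3f}")
```

With these hypothetical counts, recall is 0.68 and specificity 0.51, and overall accuracy lands near the reported 59%, showing how a recall-oriented profile coexists with modest accuracy.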
Table 5 Fine-tuned Qwen2.5Coder 3B Performance Metrics # 4.3 Comparative Analysis This section provides a comparative analysis of model-performance characteristics, delving into the resultant performance differences together with their practical deployment implications. The improvements observed through fine-tuning, i.e. 19 percentage points for the fine-tuned LLaMA 3B model and 14 percentage points for the fine-tuned Qwen2.5Coder 3B model, demonstrate the effectiveness of domain-specific adaptation even with limited computational resources and training data. Fig. 3 Model Performance Improvement Through Fine-tuning The larger performance gain realised by the fine-tuned LLaMA 3B model relative to the fine-tuned Qwen2.5Coder 3B model merits attention and likely stems from architectural factors that modulate fine-tuning effectiveness. The magnitude of the observed performance uplift is striking in light of the task’s inherent complexity and the constrained computational budget under which fine-tuning was conducted. The 8% performance divergence between the fine-tuned LLaMA 3B (67%) and Qwen2.5Coder 3B (59%) models necessitates a closer examination of architectural determinants. This gap underscores the pivotal influence of design choices and pretraining regimes on adaptation capacity for specialised niche tasks. Moreover, the fine-tuned LLaMA 3B model challenges the presumption that code-specialised models (such as the fine-tuned Qwen2.5Coder 3B model used) are invariably optimal for all code-related applications including niche domains, particularly in resource-constrained settings. Key architectural differences contribute to this performance differential: • Positional Encoding: LLaMA 3B leverages Rotary Position Embeddings (RoPE) [13], affording superior modelling of long-range dependencies—crucial for detecting cross-function reentrancy patterns—relative to the hybrid positional-encoding scheme adopted by Qwen2.5Coder. 
• Pretraining Corpus: LLaMA’s more balanced pretraining corpus, with only 17% code versus Qwen2.5Coder’s 70%, supports the view that diverse training data benefits transfer learning for reasoning-intensive tasks. These findings suggest several implications: • Domain-Specialized Models ≠ Task-Specialized Performance: LLaMA’s outperformance undermines the premise that code-specialised models are inherently superior for code and niche code domains. • Fine-tuning Efficiency: The disparity in fine-tuning efficacy across architectures underscores that low-rank adaptation success is model-dependent. • Resource-Performance Tradeoffs: The 8% gap underscores the pivotal role of architectural selection under computational constraints, indicating that model choice can offset performance ceilings imposed by limited parameter budgets. # 4.4 Error Analysis A closer inspection of misclassification patterns furnishes salient insights into model behaviour and potential avenues for refinement. The fine-tuned LLaMA 3B model exhibits a pronounced propensity to classify contracts as non-vulnerable, attaining high specificity (92%) yet limited sensitivity (37%). In contrast, the fine-tuned Qwen2.5Coder 3B model delivers a more balanced error profile of 51% specificity and 68% sensitivity — albeit with a lower overall accuracy. This divergence in error patterns suggests that the models learned distinct feature representations during fine-tuning, specifically: • Conservative Detection Heuristics: The fine-tuned LLaMA 3B model appears to impose stricter detection thresholds. • Permissive Detection Criteria: The fine-tuned Qwen2.5Coder 3B model demonstrated more lenient detection heuristics, improving sensitivity but increasing the false positive rate. A contract-level examination of the misclassified instances reveals that both models struggled with the following cases: 1. 
Complex Cross-Contract Reentrancy Patterns: Vulnerabilities spanning multiple contracts, typically manifesting through indirect state manipulation or multi-stage dependency chains. 2. Read-Only Reentrancy Patterns: Subtle scenarios in which view functions introduce state inconsistencies that facilitate reentrancy. 3. Proxy and Delegatecall Implementations: Contracts that employ sophisticated proxy patterns or invoke delegatecall, thereby introducing intricate control flows and obscuring state dependencies. These cases align with findings by Choi et al. [14], who identified similar vulnerability patterns as particularly difficult for automated detection tools due to their intricate control flow and nuanced state management characteristics. The fine-tuned LLaMA 3B model’s abstention on 28 contracts embodies an emergent uncertainty — instead of issuing low-confidence predictions, the model flags instances demanding expert scrutiny.
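The abstention behaviour just described implies a three-way decision at inference time. A minimal sketch of such post-processing; the keyword heuristic and the `classify_output` helper are our illustrative assumptions, not the paper's actual parser:

```python
def classify_output(model_response: str) -> str:
    # Hypothetical mapping of free-form model text to three labels;
    # ambiguous or contradictory answers are routed to expert review
    # instead of being forced into a low-confidence binary guess.
    text = model_response.lower()
    says_vulnerable = "vulnerable" in text and "not vulnerable" not in text
    says_safe = ("not vulnerable" in text or "non-vulnerable" in text
                 or "safe" in text)
    if says_vulnerable and not says_safe:
        return "vulnerable"
    if says_safe and not says_vulnerable:
        return "non-vulnerable"
    return "abstain"  # flag the contract for manual scrutiny

print(classify_output("The withdraw function is vulnerable to reentrancy."))
print(classify_output("This contract is safe; no reentrancy found."))
print(classify_output("Unclear: the delegatecall path needs deeper analysis."))
```

In a triage pipeline, the "abstain" bucket plays the same role as the 28 flagged contracts: uncertain cases are surfaced rather than silently misclassified.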
Large Language Models (LLMs) are increasingly used for coding tasks, including helping developers identify bugs, and are a promising avenue for supporting tasks such as vulnerability detection, particularly given the flexibility of such generative AI models and tools. Yet for many tasks large models may not be suitable, and smaller language models that can fit, execute, and train on a developer's computer may be preferable. In this paper we explore and evaluate whether smaller language models can be fine-tuned to achieve reasonable results in a niche area: vulnerability detection, specifically detecting the reentrancy bug in Solidity smart contracts.
[ "cs.SE", "cs.AI", "cs.ET", "cs.LG" ]
# I. INTRODUCTION Place recognition, the capability of identifying previously visited locations in unmanned systems, serves as a cornerstone for achieving long-term autonomy in robotics and self-driving platforms [1]. This capability is crucial for autonomous mobile robots to achieve precise and robust positioning in unknown environments [2]. Place recognition also has a wide range of applications. For instance, it is used in loop closure detection for Simultaneous Localization and Mapping (SLAM) [3]–[5], map merging in large-scale scene mapping [6], and autonomous robot navigation [7], [8]. This work was supported in part by the National Natural Science Foundation of China (Grant No. 62202468 and No. 92367111) and the Key Science and Technology Innovation Project of CCTEG (No. 2024-TD-ZD016-01, 2024-TD-MS017). (Co-corresponding authors: Fulin Tang and Ning An) Xiaohui Jiang and Haijiang Zhu are with the College of Information and Technology, Beijing University of Chemical Technology, Beijing 100029, China. (e-mail: 2023200772@buct.edu.cn, zhuhj@mail.buct.edu.cn) Chadei Li and Fulin Tang are with the School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100190, China. (e-mail: lichade2021@ia.ac.cn, fulin.tang@nlpr.ia.ac.cn) Ning An is with the Research Institute of Mine Artificial Intelligence, China Coal Research Institute, Beijing 100013, China. (e-mail: ning.an@cctegbigdata.com) Fig. 1. Our 3D place recognition pipeline, unlike other methods, first converts the submaps into implicit neural point representations. The colored dots in the graph represent these neural representation points. Subsequently, utilizing the advantages of implicit representations, we fuse information from two modalities, namely BEV features and surface normal information of 3D segments, to generate descriptors for each submap. Light Detection and Ranging (LiDAR) can obtain accurate measurement results even under various lighting conditions, such as in darkness or when disturbed by strong light. 
As a result, LiDAR plays a crucial role in unmanned systems. Meanwhile, with the rapid development of robotic and autonomous driving systems, there is an increasing demand for LiDAR-based place recognition [9]–[12]. Despite the rapid development of place recognition methods based on LiDAR point clouds in recent years, these methods still face the following challenges: In an urban environment, the LiDAR point clouds collected at the same location often exhibit inconsistencies in density and coverage due to the variability in the trajectories and motion speeds of vehicles or robots. Therefore, the algorithm must possess robust rotational and translational invariance, as well as invariance to point cloud density [13], [14]. Dynamic objects within the scene, such as cars, pedestrians, and cyclists, continuously alter the 3D structure and distribution of the environment. Their presence is unavoidable and challenges the robustness of LiDAR point cloud-based place recognition [15], [16]. Most existing methods for hand-crafted feature-extraction-based 3D place recognition follow two primary strategies. One approach is to extract features (such as local [17], [18] or global features [19], [20]) directly from the original 3D point cloud. The other approach involves first converting the raw point cloud into a structured representation (for example, using Bird’s Eye View (BEV) [21], polar coordinate projection [22], intensity images [23], or sparse voxel grids [24]) and then extracting features. However, existing 3D-based hand-crafted feature extraction methods generally suffer from a fundamental limitation: their reliance on relatively simplistic scene representations. These methods typically focus on feature extraction from single-modal scene representations—either directly from raw point cloud data or by converting point clouds into a specific structured representation (such as BEV or polar coordinates) and relying on it. 
This dependence on single representations inevitably leads to incompleteness of scene information. The reason is that while raw point clouds preserve precise 3D geometric details, they are susceptible to density variations and lack inherent rotational and translational invariance; structured representations (e.g., BEV), although providing compact global perspectives conducive to performing feature extraction, often sacrifice fine-resolution local geometry or vertical-direction information. Consequently, it is difficult to use features extracted from any single representation to comprehensively and robustly characterize complex 3D scenes, thereby constraining the upper limit of performance for subsequent 3D place recognition tasks. To address these issues, we have deviated from the previous explicit 3D point cloud input and introduced a point-based implicit representation as the input data. We introduce this representation for three reasons: • The point-based implicit representation is compact, flexible, and continuous. It possesses rotational and translational invariance, as well as spatial density invariance. • This representation can efficiently provide high-quality, globally consistent surface normals, offering the opportunity to derive more discriminative place recognition descriptors. • This representation enables more flexible online removal of dynamic objects, leading to more robust place recognition. This paper proposes a brand-new place recognition method based on LiDAR point clouds, as illustrated in Fig. 1. This method uses a point-based implicit representation as the input for scene representation and utilizes the Signed Distance Function (SDF) and stability prediction values to filter out dynamic objects online. Subsequently, it integrates normal and BEV information to enhance place recognition capability. Specifically, we construct a normal-based descriptor by leveraging the angular differences between the surface normals of the key 3D segments. 
Meanwhile, we generate evenly distributed occupancy grid information using the point-based implicit representation and compute BEV descriptors. Finally, we combine these two types of descriptors to form the final descriptor. In summary, the contributions of this article are as follows: • We introduce a point-based implicit representation. Leveraging its consistent properties and high-quality surface normals, we construct a novel point cloud-based place recognition scheme. • We propose a cross-modal geometric hierarchy fusion method that synthesizes structural normal descriptors with rotation-invariant BEV features, enabling resilient 3D place recognition through complementary geometric representations. • We propose a training-free, handcrafted feature extraction method. Our implicit representation enables substantial storage savings and real-time operation. Extensive experiments demonstrate that our method outperforms state-of-the-art approaches. # II. RELATED WORKS In this section, we will provide a comprehensive review of implicit map representation, as well as 3D point cloud place recognition methods based on handcrafted features and deep learning. # A. Implicit Neural Map Representation In recent decades, maps based on explicit representation have been widely applied in unmanned systems [25], [26]. Common methods for explicitly representing scenes include point clouds [27], triangular meshes [28], voxel grids [29], occupancy grids [30], and so on. Recently, the effectiveness of implicit neural representation has been demonstrated in the modeling of radiation fields [31] and geometric fields [32], [33]. Such a representation has facilitated many application scenarios, such as 3D reconstruction, scene generation, novel view synthesis, and so on. Based on NeRF (Neural Radiance Field) [31], DeepSDF [32], and occupancy networks [33], a large amount of research work has been carried out. 
However, all these works use only a single multi-layer perceptron (MLP) to represent the entire scene. Although this approach is effective, it may face poor scalability, long computation time, and high memory consumption when dealing with complex scenes. Therefore, recent approaches have adopted the hybrid representation to address the above-mentioned issues. This approach jointly optimizes the explicitly stored local latent features and the shallow multi-layer perceptron [34]–[36]. With remarkable progress achieved in efficient training methods for implicit neural representations, it has become possible to scale them up to larger scenarios, and they have been widely applied in mapping and Simultaneous Localization and Mapping (SLAM). In the mapping and SLAM systems based on RGB-D data streams, researchers have mainly proposed two types of methods. One is based on a single MLP [37], [38], and the other is based on grid-based local latent features and shallow MLPs [39]–[41] to model the geometry or radiance field of the scene while tracking the camera’s pose. Some work has also been done for the cases where LiDAR is used as the input data stream. IR-MCL [42] and LocNDF [43] proposed a method to construct an implicit neural distance map through laser scanning and to localize the robot within this map. SHINE-mapping [44] aims to extend the implicit neural representation to large-scale outdoor LiDAR data. It stores local features in a sparse voxel format based on an octree, enabling it to handle large-scale scene data while maintaining efficient storage. In the application of SLAM, LONER [45] integrates incremental neural mapping with an ICP-based LiDAR odometry front-end. This approach combines the modeling capabilities of neural networks with traditional point cloud registration algorithms to enhance the accuracy and efficiency of map construction and localization. 
NeRF-LOAM [46] proposes an implicit neural LiDAR odometry and mapping system based on an online optimizable octree-based feature grid. This method utilizes the concept of NeRF and improves the quality of map construction and the accuracy of localization by optimizing the feature grid. Compared with the previous systems, PIN-SLAM [47] has achieved greater improvements in real-time performance and map storage efficiency. It represents the scene using elastic neural points with fast indexing based on a hash table, enabling the system to operate at a speed close to real-time and saving storage overhead. Inspired by the above work, our work employs a point-based implicit representation. From this representation, we obtain the scene normal vector and occupancy information, which are the foundation for subsequent place recognition. # B. Handcrafted Methods Early studies mainly described point clouds by extracting their geometric local features, including distance [48], point normal [49], angle [50], density [51], and so on. These methods have achieved success in point cloud registration and shape recognition. With the in-depth development of the research, researchers have started to attempt to encode the entire point cloud into a global feature descriptor. Z-Projection [52] projects the point cloud onto a certain direction (such as the $Z$-axis), extracts angular and height information, and then calculates the similarity through the Sørensen distance or the Wasserstein distance. Fast Histogram [53] uses a fast histogram to statistically analyze angular and height information and evaluates the similarity through the Wasserstein distance. M2DP [54] projects the point cloud onto multiple two-dimensional planes, calculates the point density on each plane, and forms a signature vector. This method avoids the calculation of point normals, improving the computational efficiency, but some information may be lost. 
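The projection-style descriptors just surveyed reduce to a common recipe: project the points onto an axis, histogram a quantity such as height, and compare histograms with a distance. A minimal sketch of that idea; the bin count, height range, and toy scans are illustrative, not any cited method's exact settings:

```python
def height_histogram(points, bins=8, z_min=0.0, z_max=8.0):
    # Normalised histogram of point heights (z values) as a crude scan descriptor.
    counts = [0] * bins
    width = (z_max - z_min) / bins
    for _, _, z in points:
        idx = min(int((z - z_min) / width), bins - 1)
        counts[idx] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

def wasserstein_1d(h1, h2):
    # For 1D histograms, the Wasserstein distance is the L1 distance
    # between cumulative distributions (up to the bin width).
    dist, c1, c2 = 0.0, 0.0, 0.0
    for a, b in zip(h1, h2):
        c1 += a
        c2 += b
        dist += abs(c1 - c2)
    return dist

# Two toy scans of the same place from shifted viewpoints: the height
# distribution (and hence the descriptor distance) is unchanged.
scan_a = [(0, 0, 1.0), (1, 0, 1.2), (0, 1, 3.5)]
scan_b = [(5, 5, 1.1), (6, 5, 1.3), (5, 6, 3.6)]
print(wasserstein_1d(height_histogram(scan_a), height_histogram(scan_b)))
```

The translation invariance shown here is exactly what such projections buy; their weakness, as noted above, is the information discarded by collapsing one dimension.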
Scan Context [55], [56] divides the horizontal space into multiple annular regions and fan-shaped regions, forming a two-dimensional height matrix. By performing nearest neighbor search through the annular key and the similarity score, good results have been achieved. To further enhance the performance, researchers have started to combine other types of features, such as intensity [57], spatial binary patterns [58], and so on. LiDAR Iris [59] converts point cloud data into “iris images”, generates binary signatures through the LoG-Gabor filter and threshold operations, and then calculates the similarity through the Hamming distance. STD [18] proposes a stable triangle descriptor based on the invariance of the triangle shape. By extracting key points and encoding them into triangle descriptors, it enables efficient position matching and geometric verification. Most of the handcrafted methods are relatively intuitive. However, they generally encounter problems when faced with large rotations and translations. # C. Learning-based Methods PointNetVLAD [17] is a deep learning-based method that combines two crucial components: PointNet [60] and NetVLAD [61]. PointNet is a network that extracts features from point clouds, while NetVLAD is an aggregator that aggregates local features into a 512-dimensional global descriptor. This method lays the foundation for subsequent approaches. Based on PointNetVLAD, two methods, namely SOE-Net [62] and PCAN [63], introduce attention mechanisms. The attention mechanisms can dynamically emphasize the more important parts in the point cloud, thereby enhancing the robustness of the descriptors. Locus [64] enhances the descriptors by introducing temporal information, which can improve the robustness against viewpoint changes. HiTPR [65] partitions the point cloud into dense voxels and utilizes a transformer network [66] to enhance the correlation of local neighbors and the dependence on global context. 
Consequently, this method demonstrates outstanding performance in highly complex and cluttered environments. Kim et al. [67] proposed a classifier based on Convolutional Neural Network (CNN) for long-term place recognition. This approach can handle significant scene changes over a long time. Semantic Scan Context [68] addresses the translation problem by introducing semantic information. LPD-Net [69] and vLPD-Net [70] utilize Graph Neural Network (GNN) for feature aggregation, thereby effectively capturing the geometric and shape characteristics of the point cloud. DAH-Net [71] proposes a density-driven adaptive hybrid network based on the density changes of point clouds. Through a dynamic local feature aggregation and a contrastive enhanced linear attention module, it attempts to address the challenges posed by the density variations of point clouds in large-scale scenes. These aforementioned learning-based methods often suffer from relatively long training times and require substantial computational resources. Moreover, they exhibit poor performance when faced with scenarios that are not part of their training samples. # III. METHOD This section will introduce our proposed method for LiDAR-based place recognition. In Section III-A, an overview of our method will be presented. Section III-B will demonstrate the neural implicit representation. Section III-D and Section III-E will describe how we obtain descriptors with normal and BEV image information. # A. Overview The main process of our proposed method is illustrated in Fig. 2. To better handle the random noise caused by variations in point cloud density and dynamic objects and obtain more discriminative descriptors, we use elastic implicit 3D points to represent the scene. Considering that the LiDAR point clouds in different scanning modes need to be processed, the input of our method is several frames of point clouds $\{\mathbf{P}_1, \ldots, \mathbf{P}_n\}$. 
The value of $n$ is determined by calculating the distance traveled by the robot, $\tau_n$. Fig. 2. Here is the pipeline of our 3D place recognition. A specific number of LiDAR frames are fed in to form a sub-map, which is then transformed into an implicit neural point representation. High-quality occupancy grid information, triangular points, and normal vector information are subsequently derived from this representation. BEV and primary 3D segments are then obtained. The Log-Gabor filter is employed to generate descriptors for the BEV. At the same time, the angular difference distribution of normal vectors is utilized to obtain geometric information descriptors for 3D segments. Finally, these two types of descriptors are fused to create a descriptor for 3D place recognition. Subsequently, for the input point cloud sequence, we use a point-based implicit representation to encode scene geometry. Starting from the first frame, the system generates samples and labels to optimize the neural implicit map, yielding a submap representation. From this implicit sub-map, we extract the scene’s occupancy grid, sample points (mesh vertices), and surface normals at these points. Next, we remove the road surface from the mesh vertices and apply Euclidean clustering to the remaining area. For each cluster, we design a feature extractor based on angular distributions between adjacent normals, deriving multiple descriptors. Following clustering of these descriptors, we obtain the geometric descriptor $\mathbf{F}^{\mathbf{G}} \in \mathbb{R}^{a}$. Concurrently, we project the occupancy grid into a BEV, transform it to polar coordinates, and compute local responses via Log-Gabor filtering. Keypoints are selected by maximum response, with their descriptors forming the BEV descriptor $\mathbf{F}^{\mathbf{B}} \in \mathbb{R}^{b}$. 
Finally, we concatenate both descriptors into the fused representation $\mathbf{F}^{\mathbf{fuse}} \in \mathbb{R}^{a+b}$. # B. Implicit Submap Representation With Elastic Neural Points We employ a memory-efficient implicit neural representation to store the geometry of the sub-map. Based on elastic neural points, this representation possesses local rotational and translational invariance. Additionally, it is unaffected by the density of the original input point cloud, has a uniform distribution, and can remove interference from certain dynamic objects. 1) Elastic Implicit Neural Point Cloud: The submap composed of neural implicit points $\mathcal{M}$ is defined as follows: $$ \mathcal{M} = \{ \mathbf{m}_i = (\mathbf{p}_i, \mathbf{q}_i, \mathbf{f}_i, t_i^c, t_i^u, \mu_i) \mid i = 1, \ldots, N \} $$ Let $\mathbf{p}_i \in \mathbb{R}^3$ and the quaternion $\mathbf{q}_i \in \mathbb{R}^4$ represent the position and orientation of each neural point $\mathbf{m}_i$ within the global coordinate frame, respectively. The optimizable latent feature encoding $\mathbf{f}_i \in \mathbb{R}^F$ of each neural point serves to capture its local geometric characteristics. To efficiently track the state of every neural point, each $\mathbf{m}_i$ retains a creation timestamp $t_i^c$, a last update timestamp $t_i^u$, and a stability metric $\mu_i$. 2) SDF Decoder: The joint feature coordinate encoding for scene representation is employed to achieve fast convergence and reconstruct surface holes. To ensure invariance of the prediction to the local translation and rotation of neural points, the query position is transformed from the global frame to the local frame of each neural point. 
Specifically, let $\mathbf{d}_j \in \mathbb{R}^3$ denote the query position in the local frame of neural point $\mathbf{m}_j$. This local coordinate $\mathbf{d}_j$ is obtained by transforming the global coordinates $\mathbf{p}_s$ of the query position using the pose of neural point $\mathbf{m}_j$, as follows: $$ \mathbf{d}_j = \mathbf{q}_j (\mathbf{p}_s - \mathbf{p}_j) \mathbf{q}_j^{-1}, \quad j = 1, \ldots, K $$ At the query position $\mathbf{p}_s$, $K$ nearby neural points are retrieved from the neural point map. For each neural point $\mathbf{m}_j$ in the $K$-neighborhood $\mathcal{N}_P$, the local coordinate is computed using Eq. (2), and then the weight $\omega_j$ of each neural point is defined as: $$ \omega_j = \frac{\|\mathbf{d}_j\|^{-1}}{\sum_{k \in \mathcal{N}_P} \|\mathbf{d}_k\|^{-1}} = \frac{\|\mathbf{p}_s - \mathbf{p}_j\|^{-1}}{\sum_{k \in \mathcal{N}_P} \|\mathbf{p}_s - \mathbf{p}_k\|^{-1}} $$ Subsequently, the coordinate encoding $\mathbf{g}_j \in \mathbb{R}^C$ for each neural point $\mathbf{m}_j$ is defined as $\mathbf{g}_j = \gamma(\mathbf{d}_j)$, where $\gamma(\cdot)$ represents the positional encoding function. 
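The two steps above, rotating the query offset into each neighbour's local frame (Eq. 2) and inverse-distance-weighting the neighbour encodings (Eq. 3 and the fusion that follows), can be sketched directly in numpy. The toy positions, identity orientations, and 4-dimensional features below are illustrative, not values from the paper:

```python
import numpy as np

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q = (w, x, y, z): q v q^{-1}.
    w, xyz = q[0], q[1:]
    return v + 2.0 * np.cross(xyz, np.cross(xyz, v) + w * v)

def interpolate(p_s, positions, quats, feats):
    # Eq. (2): query offsets expressed in each neural point's local frame.
    d = [quat_rotate(q, p_s - p) for p, q in zip(positions, quats)]
    # Eq. (3): inverse-distance weights over the K retrieved neighbours.
    inv = np.array([1.0 / np.linalg.norm(p_s - p) for p in positions])
    w = inv / inv.sum()
    # Weighted fusion of the neighbours' latent feature encodings.
    f = (w[:, None] * feats).sum(axis=0)
    return d, w, f

p_s = np.array([1.0, 0.0, 0.0])
positions = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [1.0, 3.0, 0.0]])
quats = np.array([[1.0, 0.0, 0.0, 0.0]] * 3)   # identity orientations
feats = np.random.default_rng(0).normal(size=(3, 4))
_, w, f = interpolate(p_s, positions, quats, feats)
print(w, w.sum())
```

With the two nearer neighbours at distance 1 and the third at distance 3, the weights come out as 3/7, 3/7, 1/7 and sum to one, so the fused encoding is dominated by the closest neural points, as the definition of $\omega_j$ intends.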
The feature encoding $\mathbf{f} \in \mathbb{R}^F$ and coordinate encoding $\mathbf{g} \in \mathbb{R}^C$ of the query point are derived by fusing the encodings of the nearest neighboring points with their corresponding weights:
$$ \mathbf{f} = \sum_{j \in \mathcal{N}_P} \omega_j \mathbf{f}_j, \quad \mathbf{g} = \sum_{j \in \mathcal{N}_P} \omega_j \mathbf{g}_j $$
Finally, the neural decoder $D_\theta$ generates the SDF prediction $s$ for the query position based on these encodings:
$$ s = D_\theta( \mathbf{f}, \mathbf{g} ) $$
We employ a dynamic point filtering method based on the stability $\mu_i$ and SDF values of neural points. When updating the submap, a point is classified as dynamic and filtered out if its SDF value exceeds a dynamic distance threshold $\gamma_d$ and its stability exceeds a stability threshold $\gamma_\mu$. Specifically, a sampled point $\mathbf{p}_W$ in the world coordinate system is considered dynamic if it satisfies $S(\mathbf{p}_W) > \gamma_d$ and $H(\mathbf{p}_W) > \gamma_\mu$, where $S(\mathbf{p}_W)$ is the SDF value indicating the distance from the point to the nearest surface, and $H(\mathbf{p}_W)$ is the stability value quantifying the point's consistency across multiple observations. This method effectively removes dynamic objects from the map, enhancing its accuracy and consistency.

# C. Normal Vector Descriptor

After obtaining the implicit representation of the submaps, we extract the scene's occupancy information, the mesh vertices, and their corresponding normal vectors. The process begins by discretizing the submap into a voxel grid with a predefined resolution $r$.
At each voxel corner point within this grid, we query the Signed Distance Function (SDF) value according to the method described previously. Using these SDF values, we apply an enhanced Marching Cubes (MC) algorithm to reconstruct a triangular mesh. The MC algorithm generates vertices uniformly distributed on the reconstructed surface. The normal vector is then calculated for each vertex, defining the local surface orientation. Next, we filter the vertices to remove isolated outliers. This step refines the mesh by retaining only significant vertices along with their normals. Finally, occupancy information is derived by analyzing the SDF values at the corners of each voxel. Voxels exhibiting a sign change in their corner SDF values are classified as occupied. This occupancy classification is essential for distinguishing between occupied and free space in the environment. We use the GPF [72] algorithm to filter out the road surface points from the input mesh vertices, because road surface points contain little geometric information and contribute little to the feature descriptors used for place recognition. Subsequently, inspired by the work of FEC [73], we apply a fast clustering method to the mesh vertices. This method takes the mesh vertices from which the ground points have been removed as input, and proceeds as follows:

# Algorithm 1 Fast Triangular Point Cloud Clustering

Require: mesh vertex point cloud $cloud$, minimum cluster size $min\_size$, distance tolerance $tol$
Ensure: cluster labels for each point
1: Build KD-tree $T$ from $cloud$
2: $n \gets |cloud|$
3: if $n < min\_size$ then
4:   throw "Point cloud too small"
5: end if
6: Initialize $labels[0..n-1] \gets 0$
7: $current\_label \gets 1$
8: for each point $p_i \in cloud$ do
9:   if $labels[i] \neq 0$ then
10:    continue
11:  end if
12:  $neighbors \gets T$.radius_search$(p_i, tol)$
13:  if $neighbors$ is empty then
14:    continue
15:  end if
16:  $min\_tag \gets \infty$
17:  for each $j \in neighbors$ do
18:    if $labels[j] > 0$ then
19:      $min\_tag \gets \min(min\_tag, labels[j])$
20:    end if
21:  end for
22:  if $min\_tag = \infty$ then
23:    $min\_tag \gets current\_label$
24:    $current\_label \gets current\_label + 1$
25:  end if
26:  for each $j \in neighbors$ do
27:    if $labels[j] > min\_tag$ then
28:      Relabel all points with $labels[k] = labels[j]$ to $min\_tag$
29:    end if
30:    $labels[j] \gets min\_tag$
31:  end for
32: end for

Following the derivation of clusters $\mathcal{C} = \{ \mathbf{C}_1, \ldots, \mathbf{C}_M \}$, points within each cluster are normalized. This normalization reduces sensitivity to noise and improves the robustness of subsequent geometric feature extraction. Specifically, all points in a cluster are isotropically scaled to be contained within a unit sphere of radius 1. Next, the spherical domain is partitioned into 72 sub-regions based on longitude and latitude. The sphere is first divided along the z-axis into northern and southern hemispheres. Each hemisphere is further divided into two latitude bands, resulting in 4 latitude bands globally. Longitudinal division then subdivides each latitude band into 18 sectors, formed by meridians spaced 20 degrees apart. Fig. 3 illustrates this spatial subdivision.

Fig. 3. The figure illustrates the extraction of geometric descriptors for 3D segments. The input is a 3D segment with surface normals; color coding indicates normal deviations. First, these normals are mapped to discrete bins on a unit sphere.
Then, the average angular deviation of the normals is calculated within each bin. Finally, these per-bin statistics are assembled into the geometric descriptors.

Following the spherical partitioning, the mean normal vector is computed for each sub-region within a cluster. The angular deviation between all unique pairs of these regional mean normal vectors within the same cluster is then calculated, generating $\binom{n}{2}$ distinct angular measurements per cluster (where $n$ is the number of sub-regions, e.g., $n = 72$). A histogram of these angular deviations is constructed per cluster using 10-degree bin intervals (resulting in $d = 18$ dimensions), yielding a $d$-dimensional geometric descriptor vector $\mathbf{F}_\mathbf{C} \in \mathbb{R}^d$ for each cluster $C$. Subsequently, for each submap, the geometric descriptors $\mathbf{F}_\mathbf{C}$ from all its clusters are collected and normalized. The VLAD algorithm [74] is then applied to this normalized set of descriptors to generate a single submap-level geometric descriptor. To accommodate submaps with varying cluster counts $N_c$, we dynamically adjust the size of the VLAD codebook (i.e., the number of cluster centers, $K$) during encoding, setting $K$ proportionally to $N_c$. Submaps with a small $N_c$ use a correspondingly smaller $K$, while those with a large $N_c$ use a larger $K$. The resulting VLAD vector is then zero-padded to achieve a fixed dimensionality $a$. Finally, this yields the standardized geometric descriptor $\mathbf{F}^\mathbf{G} \in \mathbb{R}^a$ for the submap. The entire process is illustrated in Fig. 3.

# D. BEV Descriptor

1) Log-Gabor Filters: We project the acquired occupancy grid information onto the z-plane to obtain a BEV. The size of each pixel in the BEV corresponds to the physical size of the occupancy grids.
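The z-plane projection just described can be sketched as a column-wise collapse of occupied voxels into a 2-D image; this is an illustrative sketch under our own naming, not the authors' implementation.

```python
import numpy as np

def occupancy_to_bev(occupied_idx, grid_shape):
    """Collapse occupied voxel indices (ix, iy, iz) along z into a BEV image.

    occupied_idx: (N, 3) integer array of occupied voxel indices.
    grid_shape:   (nx, ny, nz) shape of the voxel grid.
    Each BEV pixel counts the occupied voxels in its vertical column,
    so pixel size matches the physical voxel footprint."""
    bev = np.zeros(grid_shape[:2], dtype=np.float32)
    for ix, iy, _ in occupied_idx:
        bev[ix, iy] += 1.0
    return bev
```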
Subsequently, we normalize the pixel values of the input image to the range [0, 255] to ensure numerical stability in subsequent processing. Then, we convert the input image's coordinates $(u, v)$ into the polar coordinate representation $(\rho, \theta)$. Subsequently, we construct a Log-Gabor filter [75], defined as follows:
$$ L(f, \omega, s, o) = \exp\left( -\frac{ (\log(f / f_s))^2 }{ 2 (\log(\sigma_f / f_s))^2 } - \frac{ (\omega - \omega_0)^2 }{ 2 \sigma_\omega^2 } \right) $$
Here, $f_s$ and $\omega_0$ are the center frequency and direction of the filter, respectively, and $\sigma_f$ and $\sigma_\omega$ are the width parameters of the filter. We compute each pixel's filter responses at different scales and directions using the Log-Gabor filter. First, the filter is convolved with the input image:
$$ A(\rho, \theta, s, o) = \left\| B(\rho, \theta) * L(\rho, \theta, s, o) \right\|^2 $$
Subsequently, for each direction $o$, the responses of all scales $s$ are aggregated:
$$ A(\rho, \theta, o) = \sum_s A(\rho, \theta, s, o) $$
2) Maximum Index Map (MIM): The MIM is an orientation map that records, for each pixel, the direction of the maximum response over all directions [76]. That is, for each pixel $(\rho, \theta)$, we find the direction $o$ with the maximum response:
$$ MIM(\rho, \theta) = \arg\max_o A(\rho, \theta, o) $$
The size of the MIM is the same as that of the input image, and each pixel stores a direction index. Subsequently, the FAST corner detection algorithm is applied to the MIM to extract key points. Then, for each key point, we calculate the dominant direction around it.
A MIM region of size $J \times J$ is taken around the key point, and an orientation histogram $\mathbf{h}(o)$ is constructed for this region to count the pixel values in each direction. Then, the peak direction $o_m$ of the histogram is found:
$$ o_m = \arg\max_o \mathbf{h}(o) $$
The main direction $\beta$ is:
$$ \beta = \frac{ \pi o_m }{ N_o } $$
To achieve rotational invariance, the MIM region is aligned with the main direction by rotating it by the angle $\beta$:
$$ patch_\beta(\rho, \theta) = \mathrm{mod}( patch(\rho, \theta + \beta) - o_m, N_o ) $$
The adjusted MIM region is partitioned into $l \times l$ sub-grids. A directional histogram is constructed for each sub-grid to count the pixel values in each direction. The histograms of all sub-grids are concatenated to form a feature vector. Finally, all the feature vectors are processed using the VLAD method to obtain the final BEV descriptor $\mathbf{F}^\mathbf{B} \in \mathbb{R}^b$.

# IV. EXPERIMENTAL RESULTS AND ANALYSES

In this section, Section IV-A describes the selected datasets and the experimental settings, Section IV-B introduces the evaluation metrics, and Section IV-C presents the experimental results. Sections IV-D and IV-E report the ablation study and the running time, respectively.

Fig. 4. Datasets for evaluation.

TABLE I PARAMETERS OF OUR METHOD

# A. Dataset and Experimental Settings

Below, we provide a detailed description of the datasets used in this study. They cover various types of LiDAR and different application scenarios. The configuration of the LiDAR sensors used in these datasets is shown in Fig. 4.
1) KITTI Dataset [77]: This dataset was collected at 10 Hz in an urban environment using a mechanical LiDAR (Velodyne HDL-64E) mounted on top of a vehicle. We selected six sequences (00, 02, 05, 06, 07, 08) containing loop closures for our experiments.
2) KITTI-360 Dataset [78]: This dataset was also collected at 10 Hz in an urban scenario using a LiDAR (Velodyne HDL-64E). We selected four sequences (00, 04, 05, 06) containing loop closures for the experiment.
3) NCLT Dataset [79]: This dataset was collected using a 32-line LiDAR (Velodyne HDL-32E) mounted on a Segway robot. It was primarily collected on the University of Michigan campus, capturing variations across the four seasons, differing lighting conditions, and dynamic objects. Based on loop closure availability, we selected four sequences for the experiment: 2012-05-26 (NCLT01), 2012-08-20 (NCLT02), 2012-09-28 (NCLT03), and 2013-04-05 (NCLT04). Additionally, sequences 2012-03-17, 2012-02-04, and 2012-08-20 were used for cross-temporal place recognition experiments.
4) MulRan Dataset [80]: This dataset contains data collected from multiple scenarios in South Korea using an Ouster OS1-64 LiDAR. We selected sequences 01 and 02 from the DCC, KAIST, and Riverside scenes for the experiment; these were labeled MulRan01 to MulRan06, respectively (e.g., DCC01 = MulRan01, DCC02 = MulRan02, etc.). Additionally, sequences Sejong01 to Sejong03 were used to conduct cross-temporal place recognition experiments.

Our feature extraction method is applied to the submaps derived from the accumulated point clouds. To benchmark our approach, we selected five handcrafted feature extraction methods for comparison on the evaluated datasets: Scan Context++ [56], M2DP [54], NDT [81], BoW3D [82], and STD [18]. Among them, M2DP, NDT, BoW3D, and STD can be applied directly to the accumulated submaps.
To ensure a fair comparison, we also applied the Scan Context++ method to the submaps. For this purpose, we re-projected the accumulated submap points onto the middle frame of the submap to simulate a dense "scan" captured from that pose. For BoW3D, as its default configuration is tailored to the KITTI dataset, we only conducted experiments on KITTI and KITTI-360. Experiments involving our method and the five handcrafted feature-based baseline approaches were executed on a laptop equipped with an Intel Core i7-10875H CPU @ 5.10 GHz and 16 GB RAM. For constructing the neural implicit representations, we used an RTX 2060 GPU for acceleration. The parameter values used in our method are listed in Table I.

# B. Performance Evaluation Metrics

In this article, the conditions for establishing a loop closure are defined as follows. For a query submap, the average position of all its frames is computed and regarded as the position of that submap. If the distance between this submap's position and that of a previous submap is less than or equal to 20 meters, and the difference in their indices is greater than 50, then these two submaps are considered a positive (loop closure) pair. The distance threshold of 20 meters was chosen based on considerations of the urban environment layout, typical LiDAR sensor range, and overall scale. Each method outputs a candidate frame along with its similarity score and compares the score against its predefined decision threshold. If the score exceeds the threshold, the match is classified as positive; otherwise, it is classified as negative. For a match classified as positive, the actual geometric distance between the query frame and the candidate frame is computed. If this distance is less than 20 meters, the match is classified as a true positive (TP); otherwise, it is classified as a false positive (FP).
For a match classified as negative: if no ground truth loop closure exists for the query frame (according to the criteria above), it is classified as a true negative (TN); however, if a ground truth loop closure does exist for the query frame, it is classified as a false negative (FN). The evaluation metrics we adopt are calculated as follows:
1) Precision–Recall Curve: In the field of place recognition, precision is defined as the ratio of true positive results to the total number of identified matches. Recall, in contrast, represents the ratio of true positive results to the total number of actual positive instances. These concepts can be formally expressed as:
$$ Precision = \frac{TP}{TP + FP} $$
$$ Recall = \frac{TP}{TP + FN} $$

Fig. 5. Evaluation on twenty short-term sequences: each subfigure illustrates the precision–recall performance of the different methods. Our approach consistently outperforms all others across these sequences, demonstrating its robustness and adaptability.

Here, TP denotes the number of true positive outcomes, FP represents the number of false positive results, and FN indicates the number of false negative instances. The precision–recall curve is constructed by adjusting decision thresholds. Each point on this curve corresponds to a specific threshold setting, illustrating the precision–recall trade-off and visually demonstrating how these metrics covary with threshold adjustments.
2) AUC: The Area Under the Curve (AUC) quantifies the area beneath the Precision–Recall (PR) curve. It serves as a widely adopted metric for evaluating model performance across various decision thresholds. As a single numerical value, AUC ranges from 0 to 1. A higher AUC score indicates superior performance, reflecting the model's enhanced capacity to accurately classify positive instances across different threshold settings.
3) Max F1 Score: The F1 score, calculated using Eq. (15), is the harmonic mean of precision and recall, providing a balanced measure that combines both metrics. The maximum F1 score serves as a comprehensive performance indicator, often used to identify the optimal decision threshold. By balancing the often-competing demands of precision and recall, it effectively reconciles these priorities. The peak F1 score indicates the algorithm's overall best performance, demonstrating an optimal equilibrium between correctly identifying true positives and minimizing false positives.
$$ F_1 = 2 \times \frac{ Precision \times Recall }{ Precision + Recall } $$

# C. Experiment Performance Comparisons

To comprehensively test our method, we evaluated its performance in two major scenarios: short-term and long-term relocalization. Here, "short-term" refers to real-time detection of loop closures, while "long-term" denotes the ability to recognize locations when revisiting the same area after a long period, specifically more than one day.
1) Short-term: The PR curves for our proposed method and existing methods are shown in Fig. 5. We compared a total of 20 trajectories. Our method demonstrates superior performance compared to the existing methods. In the KITTI00 sequence, since there are only small rotational and translational changes when passing through the same location, most methods perform well. However, in scenarios with significant rotational and translational changes at the same location, such as KITTI05, KITTI08, KITTI-360 04, and KITTI-360 05, existing methods tend to generate inaccuracies, whereas our method retains robust performance. When dealing with numerous similar or repetitive scenes, or when revisiting the same location multiple times (e.g., the six sequences in the MulRan dataset), all methods show some susceptibility to these challenging conditions. Nevertheless, our method still maintains good performance.
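The threshold sweep behind the metrics of Section IV-B can be sketched generically as follows; this is not the authors' evaluation code, and it assumes one candidate score and one ground-truth flag per query.

```python
def pr_points(scores, is_true_loop):
    """Sweep the decision threshold over per-query similarity scores and
    return (precision, recall) points; is_true_loop marks real loop closures."""
    total_pos = sum(is_true_loop)
    points = []
    for thr in sorted(set(scores)):
        tp = sum(1 for s, t in zip(scores, is_true_loop) if s >= thr and t)
        fp = sum(1 for s, t in zip(scores, is_true_loop) if s >= thr and not t)
        fn = total_pos - tp  # true loops whose score fell below the threshold
        if tp + fp == 0:
            continue
        points.append((tp / (tp + fp), tp / (tp + fn) if total_pos else 0.0))
    return points

def max_f1(points):
    """Maximum harmonic mean of precision and recall over all thresholds."""
    return max(2 * p * r / (p + r) for p, r in points if p + r > 0)
```

Integrating the returned points (e.g., by the trapezoidal rule) yields the AUC used in Tables II and III.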
Table II presents the AUC values and peak F1 scores. Our method surpasses the compared approaches. To provide a detailed analysis of each method, two sets of test sequences are presented in the figure, with each method shown at its optimal operating point (indicated by the maximum F1 score).

TABLE II AUC AND MAX F1 SCORES ON TESTING SEQUENCES
Note: On the left is the AUC and on the right is the max F1 score. The best-performing method is bolded.

Fig. 6. Loop retrieval result using each method's max F1 score threshold on KITTI02.

Fig. 7. Loop retrieval result using each method's max F1 score threshold on MulRan04.

Compared to other methods, ours achieves more true positives (TP) and fewer false positives (FP) and false negatives (FN). To further highlight the distinction, we selected the KITTI02 and MulRan04 tracks and visualized their place recognition results (including TP, FP, FN, and TN) based on each method's optimal F1 score threshold. As illustrated in Figs. 6 and 7, it is evident that our approach generates fewer false matches and better detects loop closures, thereby delivering superior performance.

TABLE III AUC AND MAX F1 SCORES ON TESTING SEQUENCES
Note: On the left is the AUC and on the right is the max F1 score. The best-performing method is bolded. "NCLT-L1" and "NCLT-L2" represent "2012-03-17" to "2012-02-04" and "2012-08-20" to "2012-02-04", respectively. "MulRan-L1" and "MulRan-L2" represent "Sejong02" to "Sejong01" and "Sejong03" to "Sejong01", respectively.

2) Long-term: A robust place recognition system should perform reliably despite long-term environmental variations. To evaluate long-term place recognition performance, multiple intermittent datasets covering the same area were selected as the map sequence and query sequence, respectively. The results of our precision–recall experiments are shown in Fig. 8. AUC and F1 scores are presented in Table III.
Our method demonstrates superior performance across all metrics. Although performance declines compared to the short-term experiments on the same dataset, this results from the expanded retrieval scope and the environmental changes inherent to long-term tasks, which increase place recognition complexity. As demonstrated, our method outperforms the comparative approaches. The key factors contributing to this superior performance are threefold. First, our implicit representation ensures enhanced uniformity in the acquired BEV, which directly translates to more consistent spatial descriptors. Second, our geometric descriptors disregard ground-level information, which is prone to redundancy, and focus exclusively on regions containing rich geometric features such as edges and corners. Finally, the implicit representation inherently suppresses dynamic elements, enabling effective elimination of noise interference.

# D. Ablation Study

To analyze the contributions of individual components to place recognition performance, we designed ablation experiments. Specifically, we examined whether including the implicit neural representation, as well as the BEV and normal vector modules, affected 3D place recognition accuracy. To validate the contribution of the point-based implicit representation to our place recognition framework, we conducted two experimental configurations on our datasets. The first configuration retained all default parameters, while the second removed the implicit representation module. In the absence of an implicit submap, we adopted a voxelization-based statistical method to generate occupancy grids for standard submaps. Specifically, we calculated the point density within each voxel and applied a density threshold matching that of the implicit submaps to ensure resolution consistency.
For normal vector estimation, we employed a principal component analysis (PCA)-based approach: PCA was computed on each point's local neighborhood, with the eigenvector corresponding to the smallest eigenvalue designated as the surface normal. Ablation tests were performed across four datasets covering short-term and long-term scenarios. Quantitative results (AUC and maximum F1 score) for all datasets are summarized in Table IV.

Fig. 8. Precision–recall curves on the NCLT and MulRan long-term sequences.

Building on the implicit submap representation, we conducted two additional experiments: one utilizing exclusively BEV features and the other using solely normal vector features. Detailed quantitative comparisons are provided in Table IV. As evidenced in the table, removing the implicit submap representation causes marked performance degradation. This decline stems primarily from two sources: 1) errors in surface normal estimation, and 2) inaccuracies in occupancy grid generation, both of which introduce noise that compromises 3D place recognition robustness. It can also be seen that only when the macro-level information from the BEV perspective and the local micro-scale information from the main 3D segments cooperate do they produce a robust descriptor.

# E. Computational Cost

We conducted comparative experiments on the runtime of our method to demonstrate its efficiency in 3D place recognition. We measured the average time required to generate descriptors for each submap across all sequences, including the average time for the three most critical components, and compared our method with other approaches. The running time of our method was calculated by measuring the time for descriptor generation and retrieval. The results are presented in Table V. From the table, we can see that the running time of our method is comparable to that of most methods, even though we use an implicit representation. Our implicit representation is lightweight and efficient.
We apply voxel hashing when acquiring neural points, so that neural points can be indexed efficiently and some unnecessary optimization iterations are avoided. Meanwhile, our method for segmenting the major 3D segments is also fast and efficient. As a result, real-time performance is achieved.

TABLE IV AUC AND MAX F1 SCORES ON ABLATION STUDY
Note: On the left is the AUC and on the right is the max F1 score. The best-performing method is bolded. Results for the individual datasets are reported as averages.

TABLE V TIME CONSUMPTION (MS) OF DIFFERENT METHODS
LiDAR-based place recognition serves as a crucial enabler for long-term autonomy in robotics and autonomous driving systems. Yet prevailing methodologies relying on handcrafted feature extraction face dual challenges: (1) inconsistent point cloud density, induced by ego-motion dynamics and environmental disturbances during repeated traversals, leads to descriptor instability, and (2) representation fragility stems from reliance on single-level geometric abstractions that lack discriminative power in structurally complex scenarios. To address these limitations, we propose a novel framework that redefines 3D place recognition through density-agnostic geometric reasoning. Specifically, we introduce an implicit 3D representation based on elastic points, which is immune to the interference of the original scene's point cloud density and yields a uniform distribution. Subsequently, we derive the occupancy grid and normal vector information of the scene from this implicit representation. Finally, with the aid of these two types of information, we obtain descriptors that fuse geometric information from both the bird's-eye view (capturing macro-level spatial layouts) and the 3D segment (encoding micro-scale surface geometries) perspectives. We conducted extensive experiments on numerous datasets (KITTI, KITTI-360, MulRan, NCLT) across diverse environments. The experimental results demonstrate that our method achieves state-of-the-art performance. Moreover, our approach strikes an optimal balance between accuracy, runtime, and memory optimization for historical maps, showcasing excellent resilience and scalability. Our code will be open-sourced in the future.
# 1 Introduction

# 2 Comparative Analysis of Functional Competences and Failure Modes
2.1 Dimension 1: Semantic Coherence and Verification Challenges
2.1.1 Semantic Incoherence in GenAI
2.2 Dimension 2: Security Robustness and Risk Profiles
2.3 Dimension 3: Epistemic Limits
2.3.1 Context Integration
2.3.2 Adaptability and Generalization Boundaries
2.4 Dimension 4: Control Mechanisms and Debugging
2.4.1 Output Consistency
2.4.2 Process Traceability
2.5 Dimension Interactions

# 3 Functional Consequences for Software Engineering Practice
3.1 Philosophical Challenges in Verifying and Validating GenAI-Generated Code
3.2 The Attribution Problem: Rethinking "Responsibility"
3.3 Trust Calibration Based on Mechanism Awareness
3.4 Human Cognitive Adaptation and System Effects

# 4 Discussion
4.1 Integrating Architectural Insights with Sociotechnical Theories of AI Error
4.2 Do Normative Frameworks for XAI Adequately Address Systems with Inherent Stochasticity?
4.3 Anticipated Objections
With the rise of generative AI (GenAI), Large Language Models are increasingly employed for code generation, becoming active co-authors alongside human programmers. Focusing specifically on this application domain, this paper articulates distinct ``Architectures of Error'' to ground an epistemic distinction between human and machine code generation. When examined through their shared vulnerability to error, these two modes of generation reveal fundamentally different causal origins: human-cognitive versus artificial-stochastic. To develop this framework and substantiate the distinction, the analysis draws critically upon Dennett's mechanistic functionalism and Rescher's methodological pragmatism. I argue that a systematic differentiation of these error profiles raises critical philosophical questions concerning semantic coherence, security robustness, epistemic limits, and control mechanisms in human-AI collaborative software development. The paper also utilizes Floridi's levels of abstraction to provide a nuanced understanding of how these error dimensions interact and may evolve with technological advancements. This analysis aims to offer philosophers a structured framework for understanding GenAI's unique epistemological challenges, shaped by these architectural foundations, while also providing software engineers a basis for more critically informed engagement.
# I. INTRODUCTION

Promising progress has been made in autonomous driving (AD) in recent years; however, some challenging problems in AD have yet to be solved, especially under dynamic, multimodal environments, such as contextual understanding and interpretability [1]. Commonly adopted AD architectures, whether modular or end-to-end, often struggle to integrate insights across heterogeneous sensor modalities, such as cameras, LiDAR, IMU, and GPS, especially in edge cases where visual information is ambiguous or missing [2]. Recent research has begun exploring the integration of large language models (LLMs) into autonomous driving tasks. For example, DriveLM [7] proposed structured reasoning around visual input, and V2V-LLM [8] advanced cooperative multimodal communication between vehicles. Additionally, frameworks such as GenFollower [9] and LMDrive [10] have emphasized instruction-following and human-like behavior modeling. Similarly, prompting techniques have also advanced LLMs by improving reasoning and problem-solving. LaMPilot [11] and KoMA [12] both leveraged language-based prompting agents for decision-making, while TreeOT [13] and ReActSR [14] proposed similar methods that prompt LLMs to explore multiple reasoning paths, enhancing deliberate problem-solving, reasoning, and acting. However, current approaches concentrate narrowly on closed-loop planning or single-task prompting, and use basic reasoning reliant only on relative object positions for visual understanding. As a result, they struggle to generalize to varied driving scenarios where visual sensors are unreliable, for instance, when cameras are misaligned or during hazardous driving conditions. Motivated by the aforementioned limitations, we introduce DriveAgent, a modular, LLM-driven multi-agent framework designed to reason over multimodal sensor streams in autonomous driving scenarios.
DriveAgent integrates camera, LiDAR, GPS, and IMU data through a hierarchy of specialized agents that perform perception, reasoning, and decision-making tasks in a coordinated manner. Our framework leverages the structured compositionality of LLMs and domain-specific sensor processing modules to deliver clear, reliable responses across both typical and challenging driving situations. Unlike prior works that focus on end-to-end planning or vision-language alignment alone [15], [16], DriveAgent offers a generalizable architecture that explains vehicle behavior, environmental dynamics, and causal events across multiple sensor types. Fig. 1 illustrates the scope of our proposed study, showing how multimodal sensor inputs (e.g., camera, LiDAR, GPS, and IMU data) and text data support both vehicle-level and environmental-level tasks. Our contributions include:
1) Multi-Modal Agent System: The proposed multimodal agent system enables cohesive, end-to-end reasoning in complex driving contexts.
2) Vision-Language Model Fine-tuning Strategy: The proposed fine-tuned VLM provides capabilities including object detection and traffic interpretation for the proposed system.
3) Self-Reasoning Benchmarks: Autonomous driving performance is evaluated based on tasks such as data analysis, visual reasoning, and integrated environment understanding.
4) Three-Tier Driving Dataset: The collected dataset represents standard, typical, and challenging AD scenarios, offering distinct challenges for comprehensive training and evaluation.

# II. METHODOLOGY

Our approach addresses four key tasks through a structured reasoning process. Given an input instruction $\mathcal{T}$, the module $\mathcal{M}$ produces a response $\mathcal{R}$ in adherence to the prompt. To facilitate driving analysis, we design four sequential modules, as demonstrated in Fig. 2: (1) Descriptive Analysis, (2) Vehicle Reasoning, (3) Environmental Reasoning, and (4) Response Generation.
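The dataflow between the four modules can be sketched as a plain orchestration skeleton. All callables here are stand-ins for the LLM/VLM agents; the function names are ours, not the paper's, and agent internals are elided.

```python
def drive_agent_pipeline(sensors, select_timestamps, lidar_agent, vision_agent,
                         aggregator, env_agent, causal_agent, responder):
    """Orchestrate the four sequential modules over multimodal sensor data."""
    # Module 1: Descriptive Analysis -- pick critical (timestamp, factor) pairs
    tf_pairs = select_timestamps(sensors)
    # Module 2: Vehicle Reasoning -- per-sensor descriptions, then diagnosis
    lidar = [lidar_agent(sensors, t) for t, _ in tf_pairs]
    vision = [vision_agent(sensors, t) for t, _ in tf_pairs]
    diags = [aggregator(l, v) for l, v in zip(lidar, vision)]
    # Module 3: Environmental Reasoning -- changes between consecutive
    # timestamps, then causal analysis producing caution flags
    changes = [env_agent(vision[i - 1], vision[i], lidar[i - 1], lidar[i])
               for i in range(1, len(tf_pairs))]
    cautions = [causal_agent(c) for c in changes]
    # Module 4: Response Generation -- fuse diagnosis and caution flags
    # (the first timestamp has no predecessor, hence no caution context)
    return [responder(d, c) for d, c in zip(diags, [None] + cautions)]
```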
In the first phase, the system selects $n$ critical timestamps where significant events occur. We denote these timestamps and their triggering factors as $\{(T_i, F_i)\}_{i=0}^{n}$, where $T_i$ is the $i$-th timestamp and $F_i$ is the factor that prompted its selection. This set of time-factor pairs forms the basis for all subsequent analyses. The vehicle-reasoning phase consists of two independent sensor agents and one integration agent. The LiDAR agent $\mathcal{M}_L$ produces triplets $\{(T_i, F_i, L_i)\}_{i=0}^{n}$, where $L_i$ is the LiDAR-based description at time $T_i$. Similarly, the vision agent $\mathcal{M}_V$ produces $\{(T_i, F_i, V_i)\}_{i=0}^{n}$, with $V_i$ being the vision-based description at $T_i$. An aggregator agent $\mathcal{M}_D$ then compares each LiDAR description $L_i$ with the corresponding vision description $V_i$ to diagnose potential vehicle anomalies $D_i$. In parallel, an environmental reasoning agent uses $V_i$ and $L_i$ to analyze changes in the surrounding environment between consecutive timestamps. It identifies environment variations $E_{i+1}$ between times $T_i$ and $T_{i+1}$ (yielding the changes $\{E_1, E_2, \ldots, E_n\}$) and passes them to a causal analysis agent $\mathcal{M}_C$. The causal analysis agent uncovers the mechanisms behind each detected change and flags any objects requiring heightened caution as $C_i$. Finally, the response aggregation agent $\mathcal{M}_R$ consolidates the vehicle diagnostics $D_i$ from $\mathcal{M}_D$ and the caution flags $C_i$ from $\mathcal{M}_C$, and synthesizes them into a final response $\mathcal{R}_i$ for each critical timestamp $T_i$.
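The per-timestamp dataflow among $\mathcal{M}_L$, $\mathcal{M}_V$, $\mathcal{M}_D$, $\mathcal{M}_C$, and $\mathcal{M}_R$ can be sketched in plain Python. The toy functions below are illustrative stand-ins for the LLM-backed agents; all function names and return values are our assumptions, not the paper's implementation:

```python
# Hypothetical stand-ins for the LLM-backed agents; each is modeled
# as a plain function over the per-timestamp records (T_i, F_i).
def lidar_agent(t, f):          # M_L: LiDAR-based description L_i
    return f"lidar@{t}"

def vision_agent(t, f):         # M_V: vision-based description V_i
    return f"vision@{t}"

def diagnose(l, v):             # M_D: compare L_i and V_i -> diagnosis D_i
    return {"consistent": l.split("@")[1] == v.split("@")[1]}

def caution_flags(prev_v, cur_v):   # M_C: causal analysis over frames
    return [] if prev_v == cur_v else ["scene change"]

def respond(d, c):              # M_R: final per-timestamp response R_i
    return {"diagnosis": d, "cautions": c}

def run_pipeline(time_factor_pairs):
    """Run the dataflow over the selected set {(T_i, F_i)}."""
    records = [(t, f, lidar_agent(t, f), vision_agent(t, f))
               for t, f in time_factor_pairs]
    responses = []
    for i, (t, f, l, v) in enumerate(records):
        d = diagnose(l, v)
        prev_v = records[i - 1][3] if i > 0 else v  # no change at i = 0
        responses.append(respond(d, caution_flags(prev_v, v)))
    return responses
```

The point of the sketch is the wiring, not the agents: each $\mathcal{R}_i$ is assembled from a diagnosis $D_i$ and caution flags $C_i$ computed independently per timestamp.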
Each $\mathcal{R}_i$ thus contains both the vehicle's condition diagnosis ($D_i$ from the sensor comparison) and the relevant environmental and causal information ($C_i$ indicating any cautionary context).

# A. Module 1: Descriptive Analysis

Determining which information is crucial for an accurate route description is a fundamental challenge in route analysis. We address this with a self-referential filtration system that automatically identifies critical timestamps based on the vehicle's motion. The filtration threshold is determined by an LLM agent analyzing prototypical route descriptions, from both real and simulated autonomous driving on predefined paths. A single agent handles route classification and threshold selection via this mechanism. We categorize driving routes based on their speed $S$ and an urban complexity indicator $U$. Specifically, we define the function $\mathcal{R}(S, U)$, which outputs both a route category $r_i$ and a corresponding threshold $\theta_i$. Formally (the double colon $::$ indicates this correspondence):

$$ \mathcal{R}(S, U) \in \{ r_1 :: \theta_1,\ r_2 :: \theta_2,\ r_3 :: \theta_3 \}, $$

where $r_1$ represents high-speed, low-complexity routes, $r_2$ represents medium-speed, medium-complexity routes, and $r_3$ represents variable-speed, high-complexity routes. For each category $r_i$, the threshold $\theta_i$ is computed by an agent function $G$:

$$ \theta_i = G(S, U, r_i), $$

which tailors standard kinematic baselines (angular velocity of $10^{\circ}/s$, linear acceleration of $8\,m/s^2$, and yaw rate of $10^{\circ}/s$) to the specific speed $S$ and urban complexity $U$.
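A minimal sketch of the category-and-threshold mapping $\mathcal{R}(S, U)$ and $\theta_i = G(S, U, r_i)$. Only the kinematic baselines come from the text; the category boundaries and the scaling rule below are illustrative assumptions standing in for the paper's LLM agent:

```python
# Kinematic baselines stated in the text (deg/s, m/s^2, deg/s).
BASELINE = {"angular_velocity": 10.0, "linear_acceleration": 8.0, "yaw_rate": 10.0}

def route_category(S, U):
    """R(S, U): route category from speed S (m/s) and complexity U in [0, 1].
    Boundary values are assumptions for illustration only."""
    if S > 10 and U < 0.3:
        return "r1"   # high-speed, low-complexity
    if U > 0.7:
        return "r3"   # variable-speed, high-complexity
    return "r2"       # medium-speed, medium-complexity

def thresholds(S, U, category):
    """G(S, U, r_i): scale the baselines per category (assumed rule:
    relax thresholds on simple fast routes, tighten on complex ones)."""
    scale = {"r1": 1.2, "r2": 1.0, "r3": 0.8}[category]
    return {name: value * scale for name, value in BASELINE.items()}
```

Any timestamp whose measured angular velocity, acceleration, or yaw rate exceeds its $\theta_i$ entry would then be flagged as critical.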
By monitoring these kinematic signals, such as turning, acceleration/braking, and orientation changes, the filtration agent efficiently pinpoints critical timestamps reflecting significant motion changes.

Fig. 2. The four sequential DriveAgent modules: M1 Descriptive Analysis (critical-timestamp filtration from GPS and IMU data), M2 Vehicle Reasoning (vision and LiDAR descriptions compared for vehicle status diagnosis), M3 Environmental Reasoning (multimodal scene-change and causation analysis), and M4 Response Generation (final aggregation).

# B. Module 2: Vehicle Reasoning

The Vehicle Reasoning module comprises three agents: one processing vision data, one processing LiDAR data, and an analyzer agent that synthesizes both to detect vehicle abnormalities. The designed reasoning pipeline is shown in Algorithm 1.

1) Vision Descriptor: The vision agent first assigns unique labels to all detectable objects in the camera view, where each object gets an index $i$. It then examines two consecutive frames at times $t$ and $t+1$, recording the position of each object $i$ as $p_i(t)$ and $p_i(t+1)$. By comparing these positions, the agent measures how each object moved
between the timestamps and can also derive an overall average movement across all objects; these per-object position changes are denoted as $p_i(t) \sim p_i(t+1)$ for each $i$. This relative position-change analysis identifies which objects have moved and by how much, providing a per-object motion summary between $t$ and $t+1$.

# Algorithm 1 Vehicle Reasoning

Require: $\{p_i(t)\}$ for vision data at $t = 1, \dots, T$; $\overline{\{p_i(t)\}}$ for LiDAR data at $t = 1, \dots, T$; $\mathbf{L}_i(t), \mathbf{C}_i(t)$ for LiDAR/camera positions; $R$: distance threshold (e.g., 100)

1: for all $t \in \{1, \ldots, T-1\}$ do
2: &nbsp;&nbsp; for all $i$ do
3: &nbsp;&nbsp;&nbsp;&nbsp; compare $p_i(t) \sim p_i(t+1)$
4: &nbsp;&nbsp; end for
5: &nbsp;&nbsp; for all $i$ do
6: &nbsp;&nbsp;&nbsp;&nbsp; $\Delta p_i = p_i(t+1) - p_i(t)$
7: &nbsp;&nbsp; end for
8: &nbsp;&nbsp; $\Omega = \{ i \mid \| \mathbf{L}_i(t) \| \leq R \}$
9: &nbsp;&nbsp; for all $i \in \Omega$ do
10: &nbsp;&nbsp;&nbsp;&nbsp; $\Delta_i(t) = \| \mathbf{L}_i(t) - \mathbf{C}_i(t) \|$
11: &nbsp;&nbsp; end for
12: end for
13: return $\{\Delta p_i,\ \Omega,\ \Delta_i(t)\}$

2) LiDAR Descriptor: The LiDAR agent begins with the set of object labels (as identified in the LiDAR point cloud) and their positions relative to the vehicle. If multiple objects initially share the same label $L_i$, the agent disambiguates them by spatial separation or other distinctive features to ensure each object $i$ is uniquely identified. It then considers two successive timestamps $t$ and $t+1$ and obtains object $i$'s positions $p_i(t)$ and $p_i(t+1)$ from the LiDAR data. The change in position is computed as:

$$ \Delta p_i = p_i(t+1) - p_i(t) $$

3) Vehicle Status Reasoning: The analyzer agent takes the outputs of both the vision and LiDAR descriptors to diagnose the vehicle's status and sensor integrity.
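Algorithm 1 can be sketched in pure Python roughly as follows. The data layout and function names are ours, and positions are 2-D for simplicity:

```python
import math

def vehicle_reasoning(vision_pos, lidar_pos, R=100.0):
    """Sketch of Algorithm 1: per-object displacement from consecutive
    frames, range filtering, and LiDAR/camera discrepancy.

    vision_pos / lidar_pos: {object_id: [(x, y) per timestamp]}.
    """
    # Delta p_i: displacement between consecutive vision frames.
    displacements = {
        i: [(track[t + 1][0] - track[t][0], track[t + 1][1] - track[t][1])
            for t in range(len(track) - 1)]
        for i, track in vision_pos.items()
    }
    # Omega: objects whose LiDAR range at the first timestamp is within R.
    omega = {i for i, track in lidar_pos.items()
             if math.hypot(*track[0]) <= R}
    # Delta_i(t): Euclidean LiDAR/camera discrepancy for objects in Omega
    # (shown here at the first timestamp only).
    discrepancy = {
        i: math.hypot(lidar_pos[i][0][0] - vision_pos[i][0][0],
                      lidar_pos[i][0][1] - vision_pos[i][0][1])
        for i in omega
    }
    return displacements, omega, discrepancy
```

The real system loops these computations over every pair of selected timestamps; the sketch keeps one iteration to show the three returned quantities.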
As a first step, it filters out any objects beyond a $100\,\mathrm{m}$ range in the LiDAR data. Formally, it limits attention to the set $\Omega = \{ i \mid \| \mathbf{L}_i(t) \| \leq 100 \}$, where $\mathbf{L}_i(t)$ is the LiDAR-derived position of object $i$ at time $t$ (in meters). This focuses the analysis on nearby objects and also allows a preliminary check for LiDAR sensor issues (e.g., if no objects appear within range when expected, the LiDAR could be malfunctioning or noisy). For each object $i \in \Omega$, the agent then compares its LiDAR position to the corresponding camera-inferred position. Let $\mathbf{C}_i(t)$ be object $i$'s position as estimated from the camera at time $t$. We define a consistency measure between the two sensors as the Euclidean distance:

$$ \Delta_i(t) = \left\| \mathbf{L}_i(t) - \mathbf{C}_i(t) \right\| $$

If $\Delta_i(t)$ is large for a particular object, it suggests a discrepancy between LiDAR and camera, potentially due to calibration error or sensing noise. The agent also monitors whether many objects exhibit large $\Delta_i(t)$ values simultaneously, which would indicate a broader sensor misalignment or a camera issue (e.g., blurring or calibration drift affecting $\mathbf{C}_i(t)$ for multiple objects). After these checks, the agent compiles an integrated status report diagnosing any detected issues with the LiDAR data, such as missing/ghost objects or range errors, and with the camera data, such as poor object localization.

# C.
Module 3: Environmental Reasoning

The environmental reasoning module consists of two coordinated agents: one focused on detecting and characterizing environmental changes, and another dedicated to analyzing the causes of those changes. Working together, these agents provide a comprehensive understanding of the factors driving each observed environmental change, as shown in Algorithm 2.

# Algorithm 2 Environmental Reasoning

Require: $\mathcal{V}(t), \mathcal{V}(t-1)$: sets of visual detections at times $t$ and $t-1$; $\mathcal{L}(t), \mathcal{L}(t-1)$: sets of LiDAR measurements at times $t$ and $t-1$; $\mathbf{O}_i(t)$: position of object $i$ at time $t$; $\Delta t$: time interval used for change detection in object positions

1: for all $v_i(t) \in \mathcal{V}(t)$ do
2: &nbsp;&nbsp; $\Delta v_i(t) = v_i(t) - v_i(t-1)$
3: end for
4: for all $\ell_j(t) \in \mathcal{L}(t)$ do
5: &nbsp;&nbsp; $\Delta \ell_j(t) = \ell_j(t) - \ell_j(t-1)$
6: end for
7: $\Delta_{i,j}(t) = \| v_i(t) - \ell_j(t) \|$
8: for all $\mathbf{O}_i(t)$ do
9: &nbsp;&nbsp; $\Delta \mathbf{O}_i(t) = \mathbf{O}_i(t) - \mathbf{O}_i(t - \Delta t)$
10: end for

1) Environmental Reasoning: This agent identifies environmental changes by comparing current sensor readings with those from the previous timestamp. Let $\mathcal{V}(t) = \{v_1(t), v_2(t), \ldots, v_m(t)\}$ and $\mathcal{L}(t) = \{\ell_1(t), \ell_2(t), \ldots, \ell_n(t)\}$ denote the vision and LiDAR detections at time $t$, respectively.
By analyzing differences between $\mathcal{V}(t)$ and $\mathcal{V}(t-1)$ as well as $\mathcal{L}(t)$ and $\mathcal{L}(t-1)$, the agent detects new, missing, or significantly moved objects. Detected changes are classified based on type (e.g., static vs. dynamic) and severity. For each change, the agent also evaluates cross-sensor consistency. Given an object seen by both sensors, let $v_i(t)$ and $\ell_j(t)$ denote the positions of the same object as perceived by the camera and LiDAR, respectively. The sensor agreement can be quantified by the Euclidean distance between these estimates:

$$ \Delta_{i,j}(t) = \| v_i(t) - \ell_j(t) \| $$

Note that a small $\Delta_{i,j}(t)$ indicates that the vision and LiDAR agree on the object's position, whereas a large $\Delta_{i,j}(t)$ could signal sensor misalignment, calibration issues, or an actual abrupt environmental change that one sensor registers differently than the other.

2) Causal Analysis: This agent investigates the underlying causes of the changes identified above. It first retrieves the state of each relevant object from prior reasoning stages or raw sensor data, denoting the position (or state) of object $i$ at time $t$ as $\mathbf{O}_i(t)$. It then examines how each object's state has evolved over a longer interval $\Delta t$ by computing

$$ \Delta \mathbf{O}_i(t) = \mathbf{O}_i(t) - \mathbf{O}_i(t - \Delta t) $$

and flags any object with a significant change $\Delta \mathbf{O}_i(t)$ for deeper analysis. For each flagged change, the agent infers plausible causes by analyzing temporal patterns (e.g., sudden vs. gradual), environmental cues (e.g., wind or collisions), and surrounding context (e.g., nearby object motion).
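The three computations above, change detection between detection sets, the cross-sensor agreement $\Delta_{i,j}(t)$, and the displacement flagging $\Delta\mathbf{O}_i(t)$, can be sketched as follows (a toy 2-D illustration; names and thresholds are ours):

```python
import math

def detect_changes(V_prev, V_cur):
    """New and missing object ids between two detection sets {id: (x, y)}."""
    new = set(V_cur) - set(V_prev)
    missing = set(V_prev) - set(V_cur)
    return new, missing

def sensor_agreement(v, l):
    """Delta_{i,j}(t): Euclidean distance between camera and LiDAR
    estimates of the same object's position."""
    return math.hypot(v[0] - l[0], v[1] - l[1])

def flag_significant(O_prev, O_cur, threshold):
    """Ids whose displacement Delta O_i over the interval exceeds a
    (hypothetical) significance threshold, marked for causal analysis."""
    return {i for i in O_cur
            if i in O_prev
            and math.hypot(O_cur[i][0] - O_prev[i][0],
                           O_cur[i][1] - O_prev[i][1]) > threshold}
```

In the full system these primitives run once per consecutive timestamp pair, and the flagged set feeds the causal analysis agent $\mathcal{M}_C$.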
The causal analysis agent classifies each change as either self-moving (e.g., vehicles or pedestrians) or externally influenced (e.g., displaced by force), using cues such as mobility features and motion behavior. The agent then compiles a causal report summarizing the changes, inferred origins, and confidence levels, enabling informed downstream decision-making with interpretable reasoning.

# D. Module 4: Response Generation

This module synthesizes outputs from previous agents to generate a prioritized response. Each insight $a_i$ is paired with a category $c_i$ (e.g., safety, efficiency), forming the set $\mathcal{A} = \{(a_i, c_i)\}_{i=1}^{N}$. A scoring function $\Psi(a_i, c_i)$ evaluates urgency, and the highest-priority issue $\hat{a}$ is identified as:

$$ \hat{a} = \arg\max_{(a_i, c_i) \in \mathcal{A}} \Psi(a_i, c_i) $$

The agent then selects the best response $\phi^*$ from a candidate set $\Phi(\hat{a}) = \{\phi_1, \ldots, \phi_M\}$ by maximizing a utility function:

$$ \phi^* = \arg\max_{\phi_j \in \Phi(\hat{a})} \mathrm{Score}(\phi_j) $$

The final response is:

$$ \mathcal{R} = \left( \hat{a}, \phi^*, \mathcal{A}^- \right) $$

where $\mathcal{A}^- = \mathcal{A} \setminus \{\hat{a}\}$ denotes secondary insights. This structured output combines the top-priority issue, the proposed action, and the remaining considerations to support transparent and interpretable decision-making.

# III. EXPERIMENTS

# A. Datasets

Due to the lack of public datasets for evaluating an agent's understanding of driving environments, we introduce a new dataset collected from an autonomous vehicle in real-world scenarios [17]. As shown in Fig.
3, the vehicle was equipped with multiple sensors and a navigation system. All sensor data were time-synchronized for consistent multimodal observations. Sensor specifications are provided in Table I. Moreover, as summarized in Table II, our dataset covers three distinct driving routes: R1, R2, and R3. R1 spans 1277.76 meters and was recorded in a controlled environment, serving as the baseline scenario. The ego vehicle reached a maximum speed of $13.90\,m/s$ with an average speed of $7.30\,m/s$, and only forward-facing images were captured. The environment dynamic level for R1 is qualitatively described as Small, reflecting relatively simple traffic conditions. R2, measuring 969.19 meters in length, features a loop around an urban square and is qualitatively described as having a Large environment dynamic level, indicating a more complex and active driving environment. The maximum and average speeds along R2 were $11.40\,m/s$ and $4.17\,m/s$, respectively. Compared to R1, R2 includes right- and left-side camera views, providing a broader field of view. R3, at 1125.91 meters, introduces additional environmental complexity, with roadside obstructions, and is qualitatively described as having a Medium environment dynamic level, indicating moderately active traffic with added structural challenges. The maximum speed recorded was $12.09\,m/s$, with an average speed of $4.29\,m/s$. Similar to R2, R3 captures views from the right, left, and front cameras.

TABLE I SENSOR SPECIFICATION IN CHANG'AN UNIVERSITY XINDA AUTONOMOUS VEHICLE.

Fig. 3. Data collection vehicle sensor configuration and satellite images of recorded driving routes. Three routes were recorded in total at Chang'an University, Xi'an, China. Route 1 (R1) is shown as the red trajectory, Route 2 (R2) as the purple trajectory, and Route 3 (R3) as the green trajectory.

TABLE II DETAILED ATTRIBUTES FOR ROUTES R1, R2, AND R3.

| Route | Length (m) | Max speed (m/s) | Avg speed (m/s) | Env. dynamic level | Camera views |
|-------|------------|-----------------|-----------------|--------------------|--------------|
| R1 | 1277.76 | 13.90 | 7.30 | Small | Front |
| R2 | 969.19 | 11.40 | 4.17 | Large | Front, left, right |
| R3 | 1125.91 | 12.09 | 4.29 | Medium | Front, left, right |
In addition, an enhanced detection method, combining the PointPillars architecture [18] with a clustering strategy, was used to perform real-time perception on LiDAR observations and detect objects.

# B. Task and Evaluation Metrics

We define three primary tasks: (1) Object and Category Detection, (2) Vehicle Reasoning (LiDAR and visual understanding), and (3) Environmental Reasoning. Each task is validated by its contribution to scene understanding, decision-making, and system robustness, with results discussed in Section IV. For the object identification task, we consider seven key categories: four-wheel vehicles (the principal motorized participants on roads), non-four-wheel vehicles (e.g., bicycles and scooters, which often pose higher risk due to less protective coverage), pedestrians (vulnerable road users who commonly receive priority), signs (official traffic instructions and regulations), fixed installations (permanent structures, barriers, or buildings), plants (vegetation that may obscure visibility or mark boundaries), and monitors (electronic displays or cameras supporting traffic supervision). This task is trained on datasets R2 and R3 and evaluated on R1, using precision, recall, and F1 as metrics; its importance lies in ensuring the accurate classification of objects critical for traffic safety. The vehicle-reasoning task includes two subtasks: a LiDAR understanding task, evaluated by comparing the model's output with ground-truth labels in R2, and a vision-based reasoning task, assessed on R2 and R3, where misaligned camera views serve as distractors. These evaluations measure real improvements in perception accuracy and prevent false gains from random guessing. Finally, the environmental reasoning task tests the system's ability to distinguish stationary objects from independently moving ones (such as pedestrians), with improvements validated through better situational awareness, collision avoidance, and safer navigation in dynamic traffic.

# C.
Baseline Approaches

For task 1, we benchmark five leading vision-language models: LLaMA-3.2-Vision-Instruct [19], GPT-4o-mini [20], Pixtral-Large [21], GPT-4o [20], and Claude-3.7-Sonnet [22], selected for their strong performance, diverse architectures, and proven effectiveness on vision tasks. For tasks 2 and 3, we adopt three baseline methods: Zero-Shot [23], CoT [24], and CoT + Self-Refine [25]. Zero-Shot tests direct inference ability, CoT adds step-by-step reasoning, and CoT + Self-Refine further improves reasoning through iterative refinement.

# D. Reasoning Instructions

Fig. 4 outlines structured annotation guidelines that define the expected format and content of a high-quality response. These guidelines emphasize three key aspects: (1) correctly identifying vehicles and other dynamic traffic elements (e.g., bicycles, buses), (2) highlighting relevant static road infrastructure such as lane markings, traffic signs, and signals, and (3) ensuring that descriptions are objective, concise, and free from subjective or irrelevant content. To assess the quality of the model's outputs, we compare the generated descriptions against reference descriptions derived from these guidelines. The evaluation focuses on both content accuracy and coverage of key visual categories. Specifically, we extract five scene components from each output: Trees, Buildings, Vehicles, Pedestrians, and Signs. These categories are selected for their relevance to road-scene understanding and their prevalence in standard autonomous driving datasets.

Reasoning Setup: Reasoning experiments follow the multi-phase reasoning methodology described in Section II; the proposed DriveAgent is deployed to complete four sequential modules: Descriptive Analysis, Vehicle Reasoning, Environmental Reasoning, and Response Generation.
For each phase, DriveAgent generates a response based on the intermediate input from the previous step, resulting in a total of four stepwise generations per input case. Evaluation is conducted at two critical points: (1) assessing the accuracy of the agent's vehicle diagnostic reasoning, and (2) evaluating the accuracy of its environmental and causal reasoning.

VLM Implementation Details: The VLM in DriveAgent uses the LLaMA-3.2-Vision model (11B parameters) as the foundation, where the vision tower is a pretrained LLaMA vision encoder and the language model is LLaMA-3.2 (11B). A learnable linear image projection layer is inserted between the vision encoder and the LLM to align visual features with the LLM's input space. We fine-tune both the vision encoder and the LLM using Low-Rank Adaptation (LoRA) [26], while training the projection layer from scratch (i.e., no LoRA applied). All experiments are conducted on a server equipped with an NVIDIA H100 GPU. The model is optimized using the AdamW optimizer with an initial learning rate of $2 \times 10^{-4}$ and a batch size of 2. We employ a cosine learning rate decay schedule with a warm-up ratio of 0.03. Training is performed for 10 epochs using the instruction-style supervision introduced in the VLM Instructions section, where each training sample is formatted as an instruction-response pair that includes special `<Image>` tokens to denote visual inputs. In practice, we construct structured JSON-formatted prompts containing both textual instructions and placeholder tokens for images, and fine-tune the model with supervised instruction tuning on these multimodal prompts.

# IV. RESULTS AND ANALYSIS

# A. Object and Category Detection Performance

In this subsection, we first evaluate task 1, introduced in Section III-B. Table III illustrates the substantial performance gains achieved when training with structured annotation guidelines aimed at better and more accurate object identification.
The baseline LLaMA-3.2-Vision-Instruct model achieves moderate performance (Precision $= 64.33\%$, Recall $= 35.26\%$, F1-score $= 45.55$). However, once annotation guidelines are systematically applied, the VLM in DriveAgent exhibits a significant leap in all key metrics, reaching a precision of $89.96\%$ and an F1-score of 71.62, outperforming the other models in the table.

TABLE III PRECISION (%), RECALL (%), AND F1-SCORE FOR EACH BASELINE METHOD AND THE PROPOSED MODEL ON THE OBJECT AND CATEGORY DETECTION TASK. RED COLOR HIGHLIGHTS THE BEST PERFORMANCE.

Fig. 5 shows that, compared to the human annotator, DriveAgent is the only model that consistently detects monitors, while the other baselines mostly miss them, likely because overhead monitors are less conspicuous than ground-level objects. This improvement underscores the importance of precise, consistent labeling for training object detection systems. By removing ambiguity and ensuring uniform criteria for bounding boxes and class labels, the new annotations allow the model to learn object boundaries and distinctions more effectively. Consequently, DriveAgent demonstrates superior accuracy in localizing and identifying objects, validating that high-quality, structured annotation practices are crucial to achieving robust object identification performance.

# B. Reasoning Performance

Vehicle Reasoning: We first evaluate the Vehicle Reasoning task (LiDAR and Vision) introduced in Section III-B, as shown in Table IV. For LiDAR reasoning, the Zero-Shot approach achieves moderate accuracy ($47.50\%$ to $65.05\%$) across routes, establishing a baseline for detecting sensor misplacement. CoT alone leads to substantial performance drops, suggesting basic sequential reasoning struggles with subtle errors. Adding Self-Refine significantly improves accuracy, reaching $72.63\%$ on R2 and $63.89\%$ on R2-right.
DriveAgent, however, achieves strong and stable performance, particularly on R2-left ($69.90\%$), demonstrating reliable LiDAR misplacement detection.

Fig. 4. Overview of the multimodal reasoning pipeline used for driving scene understanding. Visual descriptions are generated from camera images, focusing on identifying traffic-related objects and maintaining objective scene summaries. LiDAR-based descriptions analyze object sizes and relative positions to assess driving risk. In the reasoning stages, LLM agents evaluate the correctness of sensor-based analyses (vehicle reasoning) and identify environmental changes over time (environmental reasoning). Human instructions and corresponding LLM generations are provided for each step, supporting robust, explainable autonomous driving assessments.

Fig. 5. Distribution of object categories in the human-annotated ground truth versus each model's predictions. Colour key: deep blue = fixed installations, orange = four-wheel vehicles, grey = non-four-wheel vehicles, yellow = plants, and light blue = monitors.

For Vision reasoning, detecting misaligned cameras is even more challenging.
Zero-Shot and CoT show very low accuracies on left and right views. In contrast, DriveAgent achieves notable gains, including $96.84\%$ accuracy on R2, and clear improvements across the left and right variants ($58.25\%$ and $71.30\%$), confirming that modality-specific tuning is crucial for visual sensor reasoning.

TABLE IV REASONING ACCURACY (%) IS REPORTED ACROSS MODALITIES AND REGIONS, WITH EACH TASK PRESENTING RESULTS FROM SEVERAL PROMPTING METHODS. THE LABELS R-LEFT* AND R-RIGHT* DENOTE THE LEFT- AND RIGHT-SIDE CAMERA VIEWS OF THE SAME ROUTE; THESE VIEWS ACT AS DISTRACTORS FOR THE VEHICLE-REASONING SUBTASK. RED COLOR HIGHLIGHTS THE BEST PERFORMANCE.

Environmental Reasoning: Finally, we evaluate task 3, introduced in Section III-B. The evaluation of environmental reasoning performance is based on the agent's ability to detect independently moving objects by comparing two selected timestamps. As shown in Table IV, the Zero-Shot performance is low ($37.89\%$ and $36.19\%$), indicating that without any additional reasoning cues the agent struggles with temporal object differentiation. The CoT method significantly improves performance, achieving accuracies of $56.84\%$ and $62.86\%$. However, the CoT + Self-Refine strategy offers mixed results, with performance dropping to $43.16\%$ for one set and recovering partially to $56.19\%$ for the other, suggesting that the refinement process may not always synergize effectively with the inherent sequential reasoning of CoT in this task. Notably, our proposed DriveAgent model outperforms all baselines, obtaining the highest accuracies of $58.95\%$ and $65.71\%$, respectively. These results underscore the importance of a dedicated, well-tuned approach for integrating temporal and spatial reasoning, which is critical for accurately identifying independently moving objects in dynamic environments.
# V. CONCLUSION

We introduce DriveAgent, a novel multi-agent autonomous driving framework that leverages large language model (LLM) reasoning combined with multimodal sensor fusion to enhance situational understanding and decision-making. DriveAgent uniquely integrates diverse sensor modalities, including camera, LiDAR, GPS, and IMU, with LLM-driven analytical processes structured across specialized agents. The framework operates through a modular agent-based pipeline comprising four principal modules: (i) a descriptive analysis agent identifying critical sensor-data events based on filtered timestamps, (ii) dedicated vehicle-level analysis conducted by LiDAR and vision agents that collaboratively assess vehicle conditions and movements, (iii) environmental reasoning and causal analysis agents explaining contextual changes and their underlying mechanisms, and (iv) an urgency-aware decision-generation agent prioritizing insights and proposing timely maneuvers. This modular design empowers the LLM to effectively coordinate specialized perception and reasoning agents, delivering cohesive, interpretable insights into complex autonomous driving scenarios. Extensive experiments on challenging autonomous driving datasets demonstrate that DriveAgent achieves superior performance on multiple metrics against baseline methods. These results validate the efficacy of the proposed LLM-driven multi-agent sensor fusion framework, underscoring its potential to substantially enhance the robustness and reliability of autonomous driving systems.
[ "cs.RO", "cs.DB" ]
# I. INTRODUCTION

Reinforcement Learning (RL) has achieved great success in solving decision-making tasks, such as gaming AI [1], [2], [3], autonomous driving [4], [5], [6], and robotic manipulation [7], [8], [9]. The reward function plays a pivotal role in the policy learning of RL. As the complexity of the task increases, it becomes difficult and time-consuming to carefully design suitable reward functions. Preference-based Reinforcement Learning (PbRL) is an excellent solution to the problem of reward engineering. Instead of carefully designing a reward function in advance, it uses human feedback between two behavior segments to learn reward models that match human preferences, thereby guiding the agent to act on human desires. However, PbRL methods usually suffer from inefficient feedback, i.e., large numbers of meaningless data pairs are selected and are hard to label for recovering reward models. To address this problem, various query selection schemes have been proposed, such as entropy-based query selection [10], [11], disagreement-based query selection [12], [13], [14], and policy-aligned query selection [15]. The goal is to select more informative or accountable queries for efficient reward or policy learning. Another line of work focuses on policy learning, including unsupervised policy pre-training to learn diverse behaviors [11], designing bi-level optimization for both reward and policy [14], and exploration based on the uncertainty of learned reward models [16]. These methods mainly improve learning efficiency by supplying more diverse samples or optimizing the Q-function online.
While the above carefully designed query schemes and optimizations go some way toward learning reward models consistent with human preferences, they still have trouble selecting meaningful segments that are easy to label with preferences; moreover, since PbRL is a kind of RL, making exploration preference-related has received little attention, which may hinder the policy learning of PbRL even when feedback labels are available. In this paper, we present a novel efficient query Selection and preferENce-guIded explORation (SENIOR) method to improve both the feedback- and exploration-efficiency of PbRL in robot manipulation tasks. Our main contribution is twofold. First, we introduce a Motion-Distinction-based Selection scheme (MDS). By evaluating the state density and motion direction of the robots in a segment, easily comparable and meaningful segment pairs with apparent motion distinction are used to query human preferences for high-quality labels that facilitate reward learning. Second, we design a Preference-Guided Exploration (PGE) mechanism that utilizes human preferences to encourage agents to visit unfamiliar, human-preferred states for more meaningful exploration in the form of intrinsic rewards. Through the synergy between MDS and PGE, our method can significantly accelerate the progress of reward and policy learning. Experiments show that SENIOR outperforms other methods in feedback-efficiency and policy convergence speed in both simulated and real-world complex robot manipulation tasks.

# II. RELATED WORK

# A. Reinforcement Learning from Human Preferences

Numerous studies have highlighted the critical role of human preferences in reinforcement learning (RL) [10], [11], [13], [14], [16], [17], [18], [19]. However, addressing complex tasks often requires significant amounts of human preference data, which can be expensive to collect. Therefore, selecting informative pairs of behavior segments for querying preferences is essential to enhance feedback-efficiency.
Existing query selection schemes include disagreement-based [12], entropy-based [11], and policy-aligned query selection [15]. Disagreement-based selection samples segments with high reward uncertainty. Entropy-based selection samples segments with maximum $k$-NN distance between states to increase state entropy. These two kinds of approaches improve feedback quality by selecting informative segments and achieve better performance on complex tasks than random sampling [11]. To solve the query-policy misalignment problem, [15] proposed QPA, which combines a policy-aligned query selection scheme that samples segments from recent experience with a hybrid experience replay mechanism that updates the policy more frequently on human-preferred samples.

Fig. 1: Illustration of SENIOR. PGE assigns high task rewards to rarely visited, human-preferred states to encourage efficient exploration through hybrid-experience policy updating, which in turn provides the query selection with more valuable task-relevant segments. MDS selects easily comparable, meaningful segment pairs with apparent motion distinction for high-quality labels to facilitate reward learning, providing the agent with accurate reward guidance for PGE exploration. During training, MDS and PGE interact and complement each other, improving both the feedback- and exploration-efficiency of PbRL.

Other works improve performance via policy learning [11], [14], [16], [17], [18], [20]. [11] proposed PEBBLE, which combines unsupervised pre-training with experience relabeling to improve feedback-efficiency. [14] designed MRN (Meta-Reward-Net), a bi-level optimization PbRL framework: the inner loop updates the Q-function and policy via the reward model, while the outer loop optimizes the reward function according to the Q-function's performance on preference data. MRN exceeds other methods when few preference labels are available.
[16] incorporates the uncertainty of learned reward models as an exploration bonus to achieve high feedback-efficiency. Besides, some works focus on expanding preference labels with unlabeled queries [13], [19], [21]. [13] leverages unlabeled segments and data augmentation to generate new labels, achieving efficient feedback with fewer preferences. [19] introduces a triplet loss that directly updates the reward model with unlabeled trajectories to improve reward recovery. [21] proposes self-training augmentation to generate pseudo-labels, combined with peer regularization, to escape the similarity trap and obtain confident labels without noise. In this paper, we evaluate motion information and direction similarity within trajectory segments and prefer segment pairs with apparent motion distinction, which are more task-relevant and easier for humans to compare. Our method facilitates high-quality labels and feedback-efficiency.

# B. Exploration in RL

The trade-off between exploitation and exploration is pivotal for RL. Much work has shown that appropriate exploration can accelerate policy learning [22], [23], [24], [25]. Exploration methods can be categorized into random exploration and intrinsic motivation. Random exploration includes $\epsilon$-greedy [23], action-space noise [24], [26], and parameter-space noise [25], [27]; the main idea is to randomly perturb the action output so as to reach unknown states. Intrinsic motivation methods can be broadly categorized as count-based [28], [29], [30], curiosity-based [31], [32], [33], and state entropy-based [11], [34], [35]. Count-based methods typically use a density model to fit pseudo-counts, assigning high intrinsic rewards to rarely visited regions of the state space. Curiosity-based methods compute intrinsic rewards by evaluating how familiar states are.
[31] generates a curiosity reward based on the agent's ability to predict the consequences of its actions, encouraging the agent to learn skills that might be useful later. [32] trains an ensemble of predictive models and motivates the agent to explore by maximizing their disagreement, allowing skills to be learned in a self-supervised manner without extrinsic rewards. State entropy-based methods generally maximize state entropy to encourage exploration. [34] learns a single exploratory policy with high state entropy that matches the state marginal distribution to a given target state distribution for faster exploration. [35] employs random encoders and the $k$-NN state entropy estimator to enable efficient exploration in high-dimensional observation spaces. Our method introduces human preferences into intrinsic-reward exploration, encouraging agents to visit unfamiliar, human-preferred states; it can thereby continuously supply highly valuable states for policy training.

# III. BACKGROUND

This section introduces the concepts underlying Preference-based Reinforcement Learning. We consider a standard Markov Decision Process (MDP) [36]. In discrete time, the agent-environment interaction proceeds as follows: (1) at each time step $t$, the agent performs action $\mathbf{a}_t$ based on the current environment state $\mathbf{s}_t$ and policy $\pi_{\phi}(\mathbf{a}_t|\mathbf{s}_t)$; (2) the environment transitions to state $\mathbf{s}_{t+1}$ and returns the reward $r(\mathbf{s}_t, \mathbf{a}_t)$ to the agent. The goal of RL is to learn a policy that maximizes the expected discounted return $\mathscr{R}_t = \sum_{i=0}^{T} \gamma^i r(\mathbf{s}_{t+i}, \mathbf{a}_{t+i})$, with $t$ denoting the current time step, $T$ the time horizon, and $\gamma \in [0, 1)$ the discount factor.
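As a concrete illustration of the discounted return defined above, a minimal pure-Python sketch (the reward values below are invented for illustration, not taken from the paper):

```python
def discounted_return(rewards, gamma=0.99):
    """R_t = sum_{i=0}^{T} gamma^i * r(s_{t+i}, a_{t+i}) for a finite reward sequence."""
    return sum((gamma ** i) * r for i, r in enumerate(rewards))

# Three steps of reward 1.0 with gamma = 0.5: 1.0 + 0.5 + 0.25 = 1.75
print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))
```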
PbRL uses human preference labels between behavior segments of an agent to learn a reward function $\hat{r}_{\psi}$, which is then used for learning the RL policy $\pi_{\phi}$ [12], [37], [38]. Specifically, $\sigma$ denotes a state-action sequence $\left\{ (\mathbf{s}_k, \mathbf{a}_k), \dotsc, (\mathbf{s}_{k+H}, \mathbf{a}_{k+H}) \right\}$, typically a short part of a whole trajectory. The human expert provides a preference $y$ on two segments $(\sigma^0, \sigma^1)$, e.g., $y \in \{(0,1), (1,0), (0.5,0.5)\}$, indicating that $\sigma^1$ is preferred, $\sigma^0$ is preferred, or the two are incomparable, respectively. Each feedback label is stored as a triple $(\sigma^0, \sigma^1, y)$ in the preference dataset $\mathcal{D}$. Based on the Bradley-Terry model [39], the preference prediction is computed from the learned reward model $\hat{r}_{\psi}$:

$$ P_{\psi}[\sigma^1 \succ \sigma^0] = \frac{\exp \sum_t \hat{r}_{\psi}(\mathbf{s}_t^1, \mathbf{a}_t^1)}{\sum_{i \in \{0,1\}} \exp \sum_t \hat{r}_{\psi}(\mathbf{s}_t^i, \mathbf{a}_t^i)}, $$

where $\sigma^1 \succ \sigma^0$ means that $\sigma^1$ is more consistent with human expectations than $\sigma^0$. The model assumes that the probability of a human expert preferring a segment grows exponentially with the segment's cumulative reward. Learning the reward model then becomes a binary classification problem in the supervised setting, i.e., keeping the model's predicted preferences consistent with the human labels.
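The Bradley-Terry probability of Eq. (1), and the cross-entropy reward objective derived from it, can be sketched in pure Python (a minimal sketch assuming the learned reward model $\hat{r}_{\psi}$ is given as a plain function `r_hat(s, a)`; gradient-based optimization is omitted):

```python
import math

def preference_prob(r_hat, seg0, seg1):
    """P_psi[sigma^1 > sigma^0]: softmax over summed predicted segment rewards (Eq. 1)."""
    s0 = sum(r_hat(s, a) for s, a in seg0)
    s1 = sum(r_hat(s, a) for s, a in seg1)
    m = max(s0, s1)  # subtract the max for numerical stability
    e0, e1 = math.exp(s0 - m), math.exp(s1 - m)
    return e1 / (e0 + e1)

def reward_loss(r_hat, dataset):
    """Cross-entropy loss over the preference dataset D of (seg0, seg1, y) triples,
    with y = (y(0), y(1))."""
    total = 0.0
    for seg0, seg1, y in dataset:
        p1 = preference_prob(r_hat, seg0, seg1)
        p0 = 1.0 - p1
        total -= y[0] * math.log(p0) + y[1] * math.log(p1)
    return total / len(dataset)
```

When the labels agree with the reward model (e.g., `y = (0, 1)` for a pair where the model already prefers `seg1`), the loss is small; flipping the label increases it, which is what drives the update of $\psi$.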
Thus the reward function $\hat{r}_{\psi}$ parameterized by $\psi$ is updated to minimize the following cross-entropy loss:

$$ \mathcal{L}^{\mathrm{Reward}} = - \underset{(\sigma^0, \sigma^1, y) \sim \mathcal{D}}{\mathbb{E}} \left[ y(0) \log P_{\psi}[\sigma^0 \succ \sigma^1] + y(1) \log P_{\psi}[\sigma^1 \succ \sigma^0] \right]. $$

# IV. METHOD

In this section, we introduce our method SENIOR, comprising Motion-Distinction-based Selection (MDS) and Preference-Guided Exploration (PGE), two complementary mechanisms that improve the feedback- and exploration-efficiency of PbRL.

Fig. 2: Robot apple grab task. MDS tends to select trajectories that carry more motion information and are easy to compare (a), rather than high-density trajectories with little motion information (b) or highly similar trajectories that are difficult to compare (c).

# A. Motion-Distinction-based Selection (MDS)

While uncertainty-based (disagreement or entropy) query selection schemes improve feedback-efficiency [11], they still struggle to select meaningful segment pairs that humans can confidently compare and provide preferences on. For instance, a segment in which the robot shows no apparent movement may be selected because of its high uncertainty, yet it contributes little to task learning and may lead to incorrect labels. It is therefore necessary to analyze the motion information in a segment in addition to its uncertainty. Based on this idea, we introduce a new query selection scheme, MDS. In MDS, we first estimate the density of states in each behavior trajectory using Kernel Density Estimation (KDE) [40]. As shown in Fig. 2, a high-density trajectory (Fig. 2b) means the robot stays around one state, while a low-density trajectory (Fig.
2a, 2c) means it contains more motion information. We prefer to select low-density trajectory pairs. Following this principle, we design a motion-score metric $m$ for each segment pair:

$$ m = \frac{1}{\sum_{\mathbf{p}_t \in \sigma^0} \hat{f}_{\sigma^0}(\mathbf{p}_t) + \sum_{\mathbf{p}_t \in \sigma^1} \hat{f}_{\sigma^1}(\mathbf{p}_t)}, $$

$$ \hat{f}_S(\mathbf{p}_t) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{\mathbf{p}_t - \mathbf{p}_i}{h}\right), \quad \mathbf{p}_i \in S, $$

where $(\sigma^0, \sigma^1)$ denotes a segment pair sampled from the replay buffer $\boldsymbol{B}$, and $\mathbf{p}_t$ denotes the position of the end-effector in $\mathbf{s}_t$. $\hat{f}_S(\cdot)$ is an estimate of the state density within one segment, $K$ is the Gaussian kernel function, $n$ is the length of the segment, and $h$ is the bandwidth. This process selects the segment pairs with high $m$, denoted $(\sigma_m^0, \sigma_m^1)$. To further facilitate comparison and obtain more valuable feedback, we emphasize the difference between $\sigma_m^0$ and $\sigma_m^1$. Comparing the trajectories in Fig. 2a with those in Fig. 2c, labeling becomes easy when their motion directions are distinct, which also implicitly singles out task-related segments. Inspired by this, we design a distinction-score metric $d$ to evaluate the similarity of the motion directions $\bigl(\mathbf{v}_m^0, \mathbf{v}_m^1\bigr)$ of $\sigma_m^0$ and $\sigma_m^1$.
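The motion score of Eqs. (3)-(4) can be sketched in pure Python. This is a minimal sketch under simplifying assumptions: 1-D end-effector positions and a fixed bandwidth `h`, whereas the paper uses full end-effector positions per state:

```python
import math

def kde(p, segment, h=0.1):
    """Gaussian-kernel density estimate f_hat_S(p) over the positions of one segment (Eq. 4)."""
    n = len(segment)
    return sum(math.exp(-0.5 * ((p - q) / h) ** 2)
               for q in segment) / (n * h * math.sqrt(2 * math.pi))

def motion_score(seg0, seg1, h=0.1):
    """m: inverse of the summed within-segment state densities (Eq. 3).
    A high score means low density, i.e. more motion in both segments."""
    density = (sum(kde(p, seg0, h) for p in seg0)
               + sum(kde(p, seg1, h) for p in seg1))
    return 1.0 / density

# A nearly static segment is denser than a moving one, so the moving pair
# gets the higher motion score and is preferred by MDS.
static = [0.5] * 10                   # robot hovering around one position
moving = [i / 10 for i in range(10)]  # robot sweeping through space
```

Here `motion_score(moving, moving)` exceeds `motion_score(static, static)`, so MDS would rank the moving pair first.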
Here $\mathbf{v}_m^0$ and $\mathbf{v}_m^1$ are the eigenvectors corresponding to the largest eigenvalue obtained by applying Principal Component Analysis (PCA) to the states within segments $\sigma_m^0$ and $\sigma_m^1$, respectively. The metric $d$ is computed as the cosine similarity:

$$ d = \mathbf{v}_m^0 \cdot \mathbf{v}_m^1. $$

In summary, MDS first randomly samples segment pairs $p$ from the replay buffer $\boldsymbol{B}$; it then retains the segment pairs $q$ from $p$ with the highest $m$ scores, and finally obtains the per-session segment pairs from $q$ with the lowest $d$. Segment pairs selected by MDS contain more motion information and are easier for humans to compare, thus accelerating reward learning.

# B. Preference-Guided Exploration (PGE)

We propose a novel preference-guided exploration method to provide more diverse and useful segments for query selection in PbRL. The motivation of PGE is to encourage the exploration of states that humans favor but that are unfamiliar to the agent. In detail, we maintain an additional curiosity buffer $B_{cur}$ and periodically sample data $\mathcal{E}$ from the replay buffer $\boldsymbol{B}$ to compute the exploration KDE $\hat{f}_{\mathcal{E}}$ with Eq. (4), adding $\mathcal{E}$ to $B_{cur}$. By comparing $\hat{f}_{\mathcal{E}}$ with the preference KDE $\hat{f}_{\mathcal{P}}$ computed from samples of the preference dataset $\mathcal{D}$, we give higher rewards to states with high preference density but few visits. The intrinsic reward is calculated by:

$$ r_{int}(\mathbf{p}_i) = \frac{g(\mathbf{p}_i) - \min_{\mathbf{p}_j \in B_{cur}} g(\mathbf{p}_j)}{\max_{\mathbf{p}_j \in B_{cur}} g(\mathbf{p}_j) - \min_{\mathbf{p}_j \in B_{cur}} g(\mathbf{p}_j)}, $$

$$ g(\mathbf{p}_i) = \frac{\hat{f}_{\mathcal{P}}(\mathbf{p}_i)}{\hat{f}_{\mathcal{E}}(\mathbf{p}_i)}. $$

Combined with the learned extrinsic reward $\hat{r}_{\psi}$, we define the task reward as:

$$ r_{cur}(\mathbf{s}_i, \mathbf{a}_i) = \hat{r}_{\psi}(\mathbf{s}_i, \mathbf{a}_i) + \beta_t \cdot r_{int}(\mathbf{p}_i), $$

where $\beta_t = \beta_0 (1-\rho)^t$ is a hyper-parameter that decays exponentially over time and controls the agent's trade-off between exploration and exploitation, and $\rho$ is the decay rate.

# V. EXPERIMENTS

In this section, we first introduce the RL simulation environment for our experiments. Then, our method is compared to five baseline methods in terms of sample- and feedback-efficiency. Ablation experiments show the importance of each component for learning performance. Finally, we show the experimental results on four real-world tasks with a physical robot.

# A. Simulation Environment

We evaluate SENIOR on six robotic continuous-control tasks in Meta-World [41]: Door Lock, Window Close, Handle Press, Window Open, Door Open, and Door Unlock. Similar to previous work [10], [14], [16], to evaluate the tasks quickly we use a scripted teacher that derives preferences from the Meta-World environment reward function rather than employing a real human expert; segments that receive higher environment rewards are always preferred. Preference learning under this setup has a natural upper-performance limit: an agent that directly uses the environmental reward function for policy learning. In this paper, we selected the SAC [42] method for this purpose. SENIOR can be combined with any off-policy PbRL algorithm. To verify its performance, we implement SENIOR on PEBBLE [11] (P-SENIOR) and MRN [14] (M-SENIOR) and compare against five existing methods: PEBBLE, MRN, RUNE [16], M-RUNE (RUNE with MRN), and QPA [15].

# B. Implementation Details

For all methods, we use the unsupervised pre-training proposed by PEBBLE and the same hyper-parameters and network architectures as in the original papers [14], [15], [16]. Similar to RUNE, we fine-tune $\beta_0$ and $\rho$ by setting $\beta_0 = 0.1$, $\rho \in \{0.01, 0.001, 0.0001\}$, and report the optimal results. The feedback settings, optimization update frequency, and other parameters can be found in the Appendix. We ran all algorithms independently five times for each task and repeated ten episodes for each evaluation, reporting the mean success rate and standard deviation. The experiments were run on a machine with four NVIDIA A800 GPUs.

# C. Results

Fig. 3 illustrates the task success-rate curves during training. With a smaller feedback budget on all tasks, our methods achieved the best performance and convergence speed compared to the others. As shown in Table I, M-SENIOR attains a final average success rate of $97\%$ on the six tasks and P-SENIOR $86\%$, whereas M-RUNE, RUNE, and QPA only achieve $70\%$, $60\%$, and $53\%$, respectively. M-SENIOR reaches an average $88\%$ success rate at 500K steps, which far exceeds the other methods at 500K and even at 1000K steps.
Especially in the Door Open task, the high-quality feedback provided by SENIOR is critical for sustained performance improvement, and M-SENIOR ultimately outperforms the other methods by $40\%$. Without bi-level optimization, P-SENIOR may not perform as well as M-SENIOR (in Window Open its success rate of $68\%$ is lower than M-RUNE's $78\%$), but in most cases P-SENIOR still outperforms the baselines, notably improving the average performance by $33\%$ compared to PEBBLE. This further verifies the effectiveness of our method.

Fig. 3: Learning curves on six robotic manipulation tasks as measured by success rate. The solid lines and shaded regions represent the mean and standard deviation, respectively, across five runs. Panels: (a) Door Lock, (b) Window Close, (c) Handle Press, (d) Window Open (feedback=250 each); (e) Door Open, (f) Door Unlock (feedback=1000). Compared methods: SAC, PEBBLE, MRN, RUNE, M-RUNE, QPA, P-SENIOR, M-SENIOR; x-axis: environment steps (×10⁶), y-axis: success rate (%).

TABLE I: Comparison of success rates for six tasks at 500K and 1000K steps.

Fig. 4: Comparison of final success rates for different feedback budgets on Door Lock and Window Open.

Fig. 5: Ablation study on six tasks as measured by success rate. Final results are from five independent runs.

To verify feedback-efficiency, we studied the final performance of the different methods under feedback budgets of 150, 250, 500, and 1000 in the Door Lock and Window Open tasks. As shown in Fig. 4, our method achieves the highest success rate in all settings and yields at least a $4\times$ gain in feedback-efficiency.
For example, M-SENIOR reaches nearly $100\%$ success rate with only 250 feedback on Door Lock, whereas other methods require 1000 feedback or more. Moreover, the performance of M-SENIOR improves continuously as feedback increases, which is not always the case for the other methods. This is because SENIOR consistently selects helpful feedback, whereas other query selection schemes may introduce noisy feedback that impairs reward learning during training.

# D. Ablation Experiment

We also performed ablation experiments on the six tasks to measure the effect of each component (MDS and PGE) of SENIOR. Fig. 5 shows the results under the same settings as above. Both w/o PGE and w/o MDS performed better than MRN, and the combination of the two has the highest learning efficiency. This shows that each component helps find valuable samples (high-quality feedback labels or exploration states) during training. In some cases one component dominates: MDS mainly guided policy learning in Door Lock, while PGE performed well in Handle Press. In general, the synergy between the two components is the key to the success of our method.

Influence of Feedback Quality. To further show that informative and meaningful segments are important in PbRL, we study the effect of feedback quality on reward and policy learning. We compared MDS with QPA in the Door Lock and Handle Press tasks. During training, we added Feedback Noise (-FN, $10\%$ wrong feedback) and a Feedback Filter (-FF, neglecting queries with low environment rewards) to MDS and QPA, respectively. As shown in Fig. 6, the performance of MDS-FN and QPA-FN decreases dramatically. This shows that noisy feedback seriously degrades performance and, conversely, that our MDS achieves high-quality feedback and aligns queries with the policy as QPA does.
For MDS-FF and QPA-FF, we can see that the success rate of MDS-FF becomes lower than that of MDS in both tasks, whereas QPA-FF even achieves a higher success rate than QPA in Handle Press. This suggests that our method can effectively and stably exploit samples with low environment rewards to improve performance.

Fig. 6: Influence of feedback quality on the Door Lock and Handle Press tasks.

Fig. 7: Visualization of the state-visitation distribution in the Door Lock task environment during training. The red dot marks the lock position and the bright yellow dots indicate the explored states.

Preference Guidance for Exploration. To better analyze our exploration mechanism, we visualized the robot's state-visitation distribution in the Door Lock task during training. Fig. 7 shows the behavior of PGE and RUNE, respectively. It can be seen clearly that by 200K steps PGE focuses its exploration on the area of the lock preferred by humans, whereas RUNE shows no apparent exploration around the lock. The reason is that RUNE prioritizes exploring areas with high reward uncertainty and does not supply more task-relevant information. Under the guidance of human preference, our PGE better converges its exploration towards the task goal.

# E. Deployment on a Physical Robot

We deployed the policies for Door Open, Door Close, Box Open, and Box Close in the real world, as shown in Fig. 8. To maintain consistency between simulation and the real setup, we first modified the simulation environments of these tasks in Meta-World to match the real environment and retrained the policies. Since the policy output is an increment of the robotic arm's end-effector, the policies were deployed on the real-world UR5 directly without fine-tuning. We use ArUco markers and a depth camera to obtain the object's pose.
We compared PEBBLE, MRN, RUNE, M-RUNE, QPA, and M-SENIOR, selected the optimal policy among all random seeds, and repeated each experiment 20 times. The results are shown in Table II. M-SENIOR achieved the highest success rate on all tasks, indicating that our method is more robust to the noise of real-world input when transferring from simulation to the real world.

Fig. 8: Execution sequence of the policy on a real-world UR5 robotic arm for the Door Open (a), Door Close (b), Box Open (c) and Box Close (d) tasks.

TABLE II: Success rates of the simulation and real-world experiments on four tasks.
Preference-based Reinforcement Learning (PbRL) methods avoid reward engineering by learning reward models from human preferences. However, poor feedback- and sample-efficiency remain obstacles to the application of PbRL. In this paper, we present a novel efficient query selection and preference-guided exploration method, called SENIOR, which selects meaningful, easy-to-compare behavior segment pairs to improve human feedback-efficiency and accelerates policy learning with designed preference-guided intrinsic rewards. Our key idea is twofold: (1) we design a Motion-Distinction-based Selection scheme (MDS), which uses kernel density estimation of states to select segment pairs with apparent motion and distinct directions, making them more task-related and easier for humans to label; (2) we propose a novel Preference-Guided Exploration method (PGE), which encourages exploration towards states with high preference but few visits and continuously guides the agent to valuable samples. The synergy between the two mechanisms significantly accelerates reward and policy learning. Our experiments show that SENIOR outperforms five existing methods in both human feedback-efficiency and policy convergence speed on six complex simulated robot manipulation tasks and four real-world tasks.
# 1. Introduction

While the Semantic Web and ontology engineering remain fundamental as common languages to exchange data (mainly relating to the ‘Interoperable’ of the FAIR principles [1]), recent trends in networked and shared knowledge reflect the difficulties that practitioners still experience with these approaches. For instance, while domains like the life sciences still benefit from precise annotations based on OWL ontologies [2], complementary models such as schema.org and Bioschemas [3] are becoming popular as ‘lightweight’ data models, which are easier to use and fit use cases such as harmonising large, heterogeneous datasets or making them visible to search engines and accessible via the Web [4]. Another emerging trend, arising in domains like data intelligence and machine learning, is the adoption of labelled property graphs (LPG), which are the basis of graph databases [5] and other frameworks [6,7]. With respect to RDF triples, LPG graphs are less fine-grained, being able to keep an entity's properties together in a single node, and, most importantly, they are more expressive in representing ‘binary relations with annotations’ (see Figure 1). The proliferation of LPG-based technologies has stimulated collective efforts to standardise (again…) at least the query languages, with examples like openCypher [8], later turned into GQL [9], and the Gremlin/TinkerPop framework [10]. In the authors’ experience, LPG-based data management is not an alternative to more ‘traditional’ Linked Data approaches, as the two have complementary advantages and disadvantages [11]. For instance, while systems such as Neo4j [12] can offer fast and scalable access to LPG knowledge graphs by means of the expressive Cypher language, they have no reference/standard formalism for data exchange, nor does a standard exist for representing LPG schemas and advanced schema-related entities, such as ontologies.
Both these aspects are the focus of the Semantic Web stack, and existing literature shows that the two approaches can be usefully integrated [11,13,14]. A consequence of this is the need to bridge these two worlds, in order to obtain good integrations and benefit from the best of both. In this paper, we present rdf2pg, an extensible framework to convert RDF graph data into various target LPG formats and graph databases by means of a user-defined mapping between the source RDF data and the target LPG. As we will show, the main advantage of this approach is that it allows for a sensible, domain-aware mapping between these two graph data models, which, though very similar, have significantly different ‘natural’ ways to represent certain semantics (e.g., relations with properties vs reified statements). Another objective of this paper is to explore, both qualitatively and quantitatively, the use of different graph databases and query languages when they store data that are conceptually the same and aligned to the same conceptual model. We show how our rdf2pg tool makes such alignment possible, and we base our analysis on plant-biology datasets, a typical use case for which the knowledge graph model fits very well [15–17].

# 1.1. Motivational use cases

In this section, we describe the use cases that motivated the development of the rdf2pg tool and the management of the respective datasets with graph databases and their query languages.

KnetMiner. The rdf2pg framework is an evolution of the rdf2neo tool [13]. The need for the latter arose within the KnetMiner project [17,18], a platform that provides functionality to explore knowledge about molecular biology. KnetMiner offers end users an easy-to-use web application to quickly search genes of interest and related entities (e.g., diseases, biological processes, scientific literature), and to visualise them in various forms, including knowledge networks.
This is based on knowledge graphs that are built by integrating many useful public data sources. The same data that power the web application are also available programmatically, in the form of a specific web API [19], a SPARQL endpoint, and a Neo4j data endpoint (that is, Cypher access via the BOLT protocol and the Neo4j browser). The KnetMiner team decided to use (and share) both RDF-based data and Neo4j due to their complementary pros and cons, both in general and for the specific platform needs. For example, after many years of varying success, RDF and the Semantic Web stack are still reliable standards for integrating data, being particularly suitable for sharing schemas and ontologies [20] and for operations such as automatically merging datasets that refer to the same Uniform Resource Identifiers (URIs), ontology terms, and public identifiers. On the other hand, many developers and bioinformaticians find Cypher and Neo4j an easier technology to work with. As an example of the latter, they use Cypher to define ‘semantic motifs’, graph patterns capturing chains from genes to relevant entities [21] (e.g., gene->protein->bio-process->article-mention), so that the related entities can be exploited to realise application-level functionality such as displaying semantic associations between genes of interest. Another advantage of Neo4j is that its ecosystem offers useful tools to manage and analyse the data, such as Bloom [22] or NeoDash [23]. Recently, the latter has been used to summarise the general characteristics of KnetMiner datasets [24,25].

Enterprise Semantic ETL. This use case deals with the integration of data coming from different information systems within different enterprise domains, including health and insurance, realised by NTT Data in collaboration with the University of Zaragoza, within the project ‘Semantic Data Management in Knowler’.
Here, we describe the main project aspects while respecting the non-disclosure requirements set by the companies involved. One of the seed domains was the integration of all the in-house information about ongoing projects, employees, and their skills. Such information was scattered across various underlying information systems (usually following different governance policies). To integrate it, a knowledge graph was built from many different structured sources by designing a Semantic ETL (Extract-Transform-Load) pipeline. Each (usually structured) source followed a different model and, in order to bring everything together under the RDF format, reification of non-binary relationships often had to be applied. In our initial domain, we had Employees related to their Skills (both soft and hard skills, such as Technologies) and the particular levels achieved (evaluated within the company). Thus, in RDF, we needed to reify such relationships in order not to lose any information in the integration process. Moreover, once the triple store was completely populated, it was especially convenient to have the flexibility to build views as property graphs, possibly materialising traversals over different property paths. For example, we could materialise the time that an employee has been working on a particular kind of project and include it directly in the property graph. In this ETL, we used an ontology as an integration umbrella to bring together data from different sources. For information-integration purposes, we used RDF triple stores, which are particularly effective at handling Linked Data [26,27]. For other tasks, such as building an adaptive presentation layer, we used property graph databases, mainly because internal tests showed their better performance. We designed the pipeline in a way that ensures high flexibility in building the presentation views.
Moreover, we also paid attention to the particular requirements of enterprise scenarios, where software licence costs and the avoidance of vendor lock-in are two factors whose neglect might lead to project failure. Those two risks were avoided by adopting standard formats and by using the property graph as a general and unified data model. We chose GraphML as the reference format to produce LPG data in the ETL pipeline, as it allowed us to store labelled multigraphs in an extensible way (adding attributes or references to the edges themselves) and it is supported by many different graph databases and visualisation tools. In both the KnetMiner and Semantic ETL use cases, an approach and a software tool are needed to align RDF-represented data with different LPG data. Initially, the rdf2neo framework was developed [13], which offered the feature to map RDF data models to the LPG model supported by the Neo4j database, in a configurable way, decided by data managers as most appropriate for the datasets they deal with and their semantics. Later, similar needs arose in the Semantic ETL scenario. This has naturally led us to extend the initial rdf2neo software into the rdf2pg infrastructure that we present hereby. As described later, we introduce the concept of mapping RDF to an abstract LPG model, which allows for the execution of a specialised data generator. This same generator materialises the abstract property graph into a specific format or technology. By following this approach, we have repurposed a considerable amount of existing code for the development of an RDF-to-Neo4j and an RDF-to-GraphML converter. Furthermore, the same framework remains extendable to accommodate similar or related use cases.

# 2. Technologies and methods

The semantic web. As is well known, RDF is part of a stack of standard technologies defined under the vision of the Semantic Web [28–30].
Its main idea is to leverage World Wide Web concepts and standards to share data in the same way we share human-targeting documents. As a result, the core of the RDF model (Figure 1, bottom) consists of binary relations between entities, or between entities and data values, where the entities are described by means of resolvable URIs. URIs serve as universal entity identifiers and, in most cases, they are also web URLs that employ HTTP technology to provide documents of RDF statements about the identified entities they refer to. Over the years, a number of RDF-based formal languages (and standards) have been developed to characterise data semantics with either lightweight schemas or advanced ontologies. As mentioned earlier, the RDF data model is very fine-grained: everything is a triple, including a property associated with an entity (such as name, surname, ‘protein description’). Together with the use of URIs, this makes merging data about the same entities very straightforward. At the same time, in the original triple-only model it is neither possible to isolate a set of properties for an entity (e.g., two pairs of name+description for the same protein, coming from two different sources), nor to associate properties (or other entities) with a triple (e.g., the text-mining software name and the date of a ‘mentions’ relation between a document and a product). Both can be obtained through modelling patterns such as RDF reification [31], which is considered a difficult-to-use technique. Recently, the RDF-Star extension to RDF has been proposed [32], which allows one to use triples as subjects of other triples, making RDF more similar to LPGs and bringing the advantages of the latter into the Semantic Web world. We are interested in the future adoption of this approach as a standard. Most graph database technologies have associated graph query languages.
In KnetMiner, we utilise the Virtuoso triple store (a synonym for RDF-based graph database) to store our RDF data and make them publicly accessible via SPARQL, the standard query language for RDF, which is part of the Semantic Web stack [29,33]. SPARQL is essentially a graph pattern formalism for RDF, with a syntax that mixes the Turtle encoding format for RDF with SQL. Because the native form of the KnetMiner datasets is not based on RDF, we also employ the Jena framework, including its TDB triple store component, in the software tools that convert such data into RDF [34,35]. TDB is particularly well-suited for programmatic access, although it is not the most performant SPARQL engine available, which is why we have opted for Virtuoso for the public endpoint. Labelled property graphs. As mentioned above, Labelled Property Graph models are less fine-grained than RDF (Figure 1, top), meaning they support the notion of nodes and relations between nodes, both of which may have attributes attached, in the form of name/value pairs. The attribute values are usually of plain data types (e.g., string, number). Both nodes and relations usually have special additional attributes, namely, in the common jargon, ‘labels’ for nodes and ‘types’ for relations, commonly used to characterise the represented entity type. The approximate equivalents in RDF are the standard property `rdf:type` and the predicate's URI, respectively, while many other models have similar concepts, e.g., the class type in object-oriented languages. While, in general, LPG implementations exist that support multiple types per relation, here we will stick to the graph models used by Neo4j and the Gremlin framework, where only nodes can have multiple labels. As previously mentioned, RDF and LPGs are similar, though they have significant differences, especially in the details, and they have orthogonal technical advantages and disadvantages.
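As an illustration of the LPG model just described, the following is a minimal sketch in code (not rdf2pg's actual data structures; all field and value names are illustrative): nodes may carry multiple labels, relations carry exactly one type, and both may hold plain key/value attributes.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the LPG model described in the text (not rdf2pg code):
# nodes with multiple labels, relations with exactly one type (as in
# Neo4j/Gremlin), and key/value attributes on both.

@dataclass
class Node:
    id: str
    labels: set = field(default_factory=set)        # e.g. {"Protein"}
    properties: dict = field(default_factory=dict)  # plain name/value pairs

@dataclass
class Relation:
    type: str                                       # exactly one type
    from_id: str
    to_id: str
    properties: dict = field(default_factory=dict)

p = Node("p1", {"Protein"}, {"prefName": "P53"})
c = Node("c1", {"Protcmplx"}, {"prefName": "Complex A"})
rel = Relation("is_part_of", "p1", "c1", {"evidence": "IEA"})
```

Note how, unlike plain RDF triples, the relation itself carries an attribute (`evidence`) without any reification machinery.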
Currently, our rdf2pg supports the conversion of RDF to two LPG targets, the Neo4j graph database and the GraphML format. The converter for the latter has been designed and developed with a focus on populating Gremlin-compatible graph databases (i.e., to load GraphML files via Gremlin commands). Neo4j is one of the most popular graph databases built on the LPG model. It is known for its ease of deployment and maintenance, as well as its powerful query language, Cypher. Neo4j also offers a rich ecosystem of applications and tools designed to interact seamlessly with its data format, including a graph data science framework and database functions that ease graph embedding. GraphML [36,37] is an XML-based language to define generic graphs, supporting labelled (directed, undirected, mixed) multigraphs and hypergraphs in an extensible way. Unsurprisingly, the format has elements like `<graph>`, `<node>` and `<edge>`. Moreover, it allows for extending the core attributes used in nodes and edges, by means of definitions like: `<key id="prefName" for="node" attr.name="Preferred Name" attr.type="string"/>`. Such attribute declarations and `<data>` elements provide the aforementioned flexibility. Additionally, GraphML is supported by many graph tools and graph databases. As presented on its project page, Apache TinkerPop [38] is a graph computing framework for both graph databases (OLTP) and graph analytic systems (OLAP). In essence, it is a unifying layer that offers an abstraction over graphs and their processing, allowing different vendors to provide their specific implementations. Gremlin [10,39,40] is the graph traversal language employed by Apache TinkerPop. Its name stems from the metaphor of a gremlin hopping from graph element to graph element while performing calculations and gathering data.
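To make the GraphML structure concrete, here is a minimal sketch that emits a fragment of the kind described above using Python's standard XML library; the attribute names and values are illustrative, not actual rdf2pg output.

```python
import xml.etree.ElementTree as ET

# Minimal sketch of a GraphML document: a <key> declaration extending nodes
# with a custom attribute, plus <graph>/<node>/<data>/<edge> elements.
# Values are illustrative, not actual rdf2pg output.

root = ET.Element("graphml", {"xmlns": "http://graphml.graphdrawing.org/xmlns"})
ET.SubElement(root, "key", {
    "id": "prefName", "for": "node",
    "attr.name": "Preferred Name", "attr.type": "string",
})
graph = ET.SubElement(root, "graph", {"id": "G", "edgedefault": "directed"})
node = ET.SubElement(graph, "node", {"id": "p1"})
data = ET.SubElement(node, "data", {"key": "prefName"})
data.text = "P53"
ET.SubElement(graph, "edge", {"source": "p1", "target": "c1"})

xml = ET.tostring(root, encoding="unicode")
```

The `<key>` element declares the custom attribute once; each `<data>` element then attaches a value for it to a specific node or edge, which is the extensibility mechanism discussed in the text.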
This is how traversals, a central concept for data exploration and manipulation, are described in Gremlin. More formally, a traversal is a sequence of steps. For instance, Figure 5 shows a traversal that walks the composition relationship between protein complexes and the proteins they are made of. The query has basic traversal steps, plus operators similar to projection operators in other languages. In the KnetMiner use case, we have used ArcadeDB [41] to experiment with Gremlin queries on the plant biology datasets described later. ArcadeDB is a multi-model database [42], available as open source code and derived from the OrientDB database [43]. ArcadeDB supports, among other data models, graphs, document stores and relational/SQL databases. Although a recent and still progressing project, it is made fast by software development that exploits lower-level Java optimisations and avoids time- and memory-consuming abstractions available in the standard Java language and libraries. Moreover, it is OS-portable and suitable for cloud software environments, such as Docker. We have preferred this database to experiment with Gremlin, due to both its good performance and its ease of installation and management. rdf2pg, architecture and approach. Figures 1 and 2 show the mapping approach rdf2pg is based on. An RDF graph can be seen as a data model where part of the triples are about an LPG node's plain properties and another part are about LPG relations; the latter can be mapped either from straight RDF triples (which yield LPG relations without attributes) or from sets of triples that correspond to the semantics of reified statements or similar entities. Such mappings, which are dataset- or domain-specific, are defined by means of SPARQL queries. We selected SPARQL as the mapping language since it offers the significant advantage of not requiring anything new to be learned by those already proficient in Semantic Web technologies.
A mapping is defined by four types of SPARQL queries. Firstly, we have queries that select the URIs of those RDF nodes that are to be mapped to LPG nodes. Secondly, these URIs are used as parameters in resource-centric queries that select/map node attributes (key/value properties and labels) from the RDF. Two analogous types of query are defined for the LPG relations: one query yields the relation-identifying URI, its type and its endpoints (as already mentioned, we support the common model where a relationship has one type only), and another URI-parameterised query allows for picking up the relation properties (again, a set of key/value pairs). The latter can be omitted when mapping plain RDF triples (in which case, the relation URI is a fictitious reference that can be built with appropriate SPARQL constructs, see [44] for an example). In Figure 3 we show how the mapping is enacted by the rdf2pg architecture. In the case of nodes, a SPARQL query that selects node URIs is submitted to an input Jena TDB triple store; the returned node URIs are then batched and each batch is processed in parallel by an instance of a node handler. The node handler has an abstract base class, which contains the common logic to do two things: fetching the node attributes from RDF, using the corresponding queries, and, for each node, producing an abstract representation of the node's data (i.e., an abstract LPG view, see [45] for details). The abstract node handler is then extended in the specific LPG data transformers, so that they can turn the abstract node representation into the representation they need, e.g., Cypher CREATE instructions for the Neo4j converter or XML fragments for the GraphML converter. Conceptually similar components are available to produce LPG relations, that is, a relation processor that selects the relation URIs and types, and a relation handler, which produces LPG relations from a relation property selector defined in SPARQL.
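The node pipeline just described — select node URIs, batch them, and let a node handler build an abstract LPG view of each node — can be sketched with an in-memory stand-in for the SPARQL steps. This is illustrative only: in rdf2pg the selections are real SPARQL queries run against a Jena TDB store, and all names below are hypothetical.

```python
# Toy stand-in for the rdf2pg node pipeline (illustrative, not project code).
# Plain Python functions play the role of the SPARQL mapping queries.

triples = [
    ("ex:p1", "rdf:type", "bk:Protein"),
    ("ex:p1", "bk:prefName", "P53"),
    ("ex:c1", "rdf:type", "bk:Protcmplx"),
    ("ex:c1", "bk:prefName", "Complex A"),
]

def select_node_uris(triples):
    """Stand-in for the 'node URIs' SPARQL query."""
    return sorted({s for s, p, o in triples if p == "rdf:type"})

def node_handler(uri, triples):
    """Stand-in for the attribute/label queries: builds an abstract LPG node view."""
    labels = {o for s, p, o in triples if s == uri and p == "rdf:type"}
    props = {p: o for s, p, o in triples if s == uri and p != "rdf:type"}
    return {"iri": uri, "labels": labels, "properties": props}

def batches(items, size=2):
    """URIs are batched; rdf2pg processes each batch in a parallel handler."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

abstract_nodes = [node_handler(u, triples)
                  for batch in batches(select_node_uris(triples))
                  for u in batch]
```

A target-specific transformer would then turn each abstract node view into, e.g., a Cypher CREATE instruction or a GraphML `<node>` fragment, which is exactly the extension point described above.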
Clearly, a future extension to some other kind of similar RDF/LPG transformer would be based on this architecture and thus would require its own implementations of the node handler and relation handler. For further mapping flexibility, the framework supports the definition of multiple query sets (sets of node and relation mapping queries), for cases where a single RDF graph has subgraphs that are mapped differently to the LPG model. For instance, one might have to map gene and experiment representations, which are entities with enough differences (classes, node properties, relations) to warrant separate mappings.

# 3. Results

In this section, we evaluate rdf2pg as a tool to expose conceptually equivalent data as multiple datasets, supporting multiple data access languages. This can be considered an expansion of a similar evaluation initially done for the rdf2neo tool [13]. In particular, we first briefly review the three query languages that we have used and provide a qualitative analysis stemming from our experience and the lessons learned during the development of rdf2pg. Then, we present a quantitative evaluation of, on the one hand, the performance of rdf2pg in converting from RDF to three different graph databases and, on the other hand, the execution performance of semantically equivalent queries written in three different query languages, each against the rdf2pg-populated databases. Overall, this work aims at a broad comparison of essentially equivalent datasets managed with different graph databases and graph query languages, where their semantic alignment is obtained through a tool like rdf2pg.

# 3.1. Test datasets

Details about our benchmarks are available at the GitHub repository [46]. As explained there, we have used three datasets about plant biology.

Fig. 1. An example (from [49]) of a Labelled Property Graph (top) and how it can be represented as RDF triples (bottom).
All the datasets have similar entities and mostly follow the same schema, which allows for assessing the scalability of the same queries. Figure 4 shows the elements of this schema that we considered when writing the benchmark queries. Essentially, one aspect these elements describe is gene annotations, including Gene Ontology [47] annotations, gene mutations (SNPs) and associated mutation phenotypes, together with phenotype annotations from the Plant Trait Ontology [48]. Another aspect is the biochemical processes (so-called biological pathways) in which gene products are involved. The starting point for the benchmark was a set of RDF files representing these data, from which we populated Virtuoso triple stores (providing SPARQL access) by direct RDF loading, Neo4j databases (providing Cypher) using rdf2neo, and ArcadeDB databases (providing Gremlin) using rdf2graphml and then the Gremlin helpers to import from GraphML. The result, for each dataset, is that the three different databases contain data aligned to the same conceptual model; therefore, it was possible to design several query tasks that could be translated into semantically equivalent queries in the tested query languages (see our benchmark report for details about the semantic equivalence).

# 3.2. Qualitative considerations

In this section, we compare the three tested graph query languages, SPARQL, Cypher and Gremlin, from qualitative perspectives. This includes examining the syntax and patterns that the languages offer, identifying which queries are easy to write and which ones present challenges. We present our experience in this area by referring to the queries used for the benchmark; all mentioned queries are listed in the benchmark code repository [46]. Both SPARQL and Cypher are declarative languages and both allow one to work with graph patterns, that is, templates that describe subgraphs in the database to be matched and retrieved.
This declarative approach is typically exploited by query engines for optimisation operations, such as query rewriting.

[Figure: examples of the SPARQL mapping queries used by rdf2pg — a) selection of node IRIs; b) selection of node properties (i.e., attributes, working similarly for relations) and of node labels; c) selection of relation IRIs/types/endpoints, covering both plain (property-less) triples and reified relations.]

SPARQL, being bound to the RDF model, shapes this graph pattern paradigm around the idea of triple patterns (Figure 5): a network of nodes like the one in Figure 4 translates to a list of triples in the pattern, where nodes participating in multiple triples are simply listed with the same binding variable names. An advantage of this is that the syntax is rather simple and operations like querying integrated data (where entities with the same URIs were automatically merged in the database) are straightforward. Typical disadvantages are that certain patterns are rather verbose, e.g., matching chains of nodes or many properties for a node, and dealing with data modelling workarounds such as reification may be even more verbose.
The joinRel query from our benchmark is an example of the latter. On the other hand, Cypher is a language oriented to property graphs and its syntax is often more ‘visual’, giving the feeling that you can draw the subgraphs to be matched, in particular chain patterns. For a typical example, compare the Cypher version of the joinRel example to the SPARQL one. An exception to this is in cases like 2UnionNest, where graph patterns have to be built that involve many branches from hub nodes. With past versions of Neo4j and Cypher, this was particularly hard to write, requiring one to ‘flow’ partial results from one subquery to another (using the WITH and UNWIND clauses), which is very unusual with respect to the more common UNION construct. Indeed, Cypher improved this construct recently (in version 4.0) and now such queries are easier to write, as we show in 2union1Nest+.

Fig. 3. The rdf2pg architecture (from [50]). The diagram shows the components used to collect batches of RDF node references via SPARQL; the batches are then passed to parallel node handlers, which retrieve node details like properties (using additional SPARQL queries) and convert in-memory representations of LPG elements into a specific LPG target. Similar components are available to process relationships.

The Gremlin language contrasts with SPARQL and Cypher in giving the impression of a procedural language. In fact, writing a query looks similar to specifying the states/nodes and transitions/relations of a state machine. Common queries such as simple selections (see the ‘selection’ group in our benchmark) are as easy to write in Gremlin as in the other languages, and Gremlin is fairly simple when dealing with traversal patterns. Furthermore, the language is integrated with other programming languages like Groovy, which makes it a Turing-complete programming language [39], useful for advanced graph exploration tasks.
For instance, our joinReif query defines a function to match a uri property in a relationship to a node (property graphs have data properties, but do not allow properties to link to other nodes or relationships). At the same time, this power often backfires. For instance, hub-based patterns (where many subgraphs depart from or join at a node) are often hard to define efficiently in Gremlin, especially when the traversal paradigm makes it hard to link separate subgraphs through joining properties. Indeed, we needed to write the joinReif query using a lambda function, in order to make the uri-based join efficient.

Fig. 4. Typical elements of the plant biology datasets that we have used for benchmarking three different graph databases and query languages, with data coming from the same RDF sources and kept aligned using the rdf2pg tools. Details in our dedicated code repository [46].

Related to this mostly imperative nature, Gremlin leaves the query writer in control of how the graph database is explored, which can be very flexible but, at the same time, can force the query author to pay much more attention to query optimisation, including cases where such optimisation would be automatically computed by more declarative languages. We have experienced this in queries like 2union, where traversing the graph in the order enzyme-to-protein is faster than protein-to-enzyme, due to the cardinality difference between the two types. For similar reasons, we have experienced significant challenges in writing aggregations in Gremlin. For example, existAg pinpoints biological pathways with certain characteristics and then computes both a reaction-per-pathway count and an average of proteins per reaction. This is a long query to write in SPARQL and Cypher, and it becomes even longer and more convoluted in Gremlin, requiring a first part made of traversals, combined with aggregating traversals starting from the elements projected by the first part.
Another comparison we made concerns queries that traverse graph chains, i.e., graph patterns like x-to-y-to-z… (see the ‘paths’ category in our benchmark). The Cypher syntax is very compact and easy for this kind of query, and it is manageable to write them in SPARQL and Gremlin, although with more verbose patterns. These queries become more complicated when a traversal pattern has variable-length relations (e.g., a pattern that matches both protein1/xref/protein2 and prot1/xref/prot2/xref/prot3). Both SPARQL and Cypher support this in their syntax, while Gremlin requires care in writing repeat() traversal steps (see our ‘paths’ queries and section 3.27 in [40]). Things become even more complicated for traversal paths with ‘optional tails’, e.g., see the lngSmf query in the benchmark, where the chain tail consisting of protein/protein/gene is made optional. The Cypher syntax for defining such a case is still straightforward, since a chain tail having only optional relations (with min cardinality set to 0) is implicitly considered optional in its entirety, that is, the pattern does not need to match the nodes linked by such relations. Both SPARQL and Gremlin require an OPTIONAL clause to achieve the same, since node matching in these languages is not optional in this case.

# 3.3. Performance benchmark

In this section, we show the results from a quantitative benchmark that we have run over the three chosen databases and languages, measuring the times taken to populate the graph databases, as well as the average times taken by the test queries to complete over multiple executions. For these tests, we have used: Virtuoso Open Source edition, version 7.2.10-r16; Neo4j Community edition, version 5.11.0; ArcadeDB, version 23.4.1. We have used the rdf2pg framework and tools, version 5.0. We have run these systems on a virtual machine equipped with an Intel(R) Xeon(R) Gold 6152 CPU at 2.10 GHz with 8 cores, 32 GB of RAM assigned to each DB server and 32 GB assigned to the rdf2pg tools.
We have employed these settings to test three different datasets (referring to similar data and sharing a very similar schema) of increasing sizes of about 2, 21 and 97 million RDF triples. Results and more details about the test settings and approach are described in the already-mentioned GitHub repository [46]. Database population performance. The times taken to load the three test datasets into the three test databases are shown in Figure 6. This shows that even the largest dataset can be uploaded to any of the databases in a reasonable time (ranging between about 14 seconds and 30 minutes), at least in the case where a large dataset does not need frequent updates (e.g., it is uploaded once a week or less often). We have also observed that all the databases scale the uploading time linearly with the data size. In addition to the loading times, we have verified expected behaviours that depend on the details of rdf2pg. For example, the Neo4j population is influenced by the fact that the database is written by the rdf2neo tool, which has the overhead of reading the input via SPARQL. In contrast, Virtuoso is the fastest database in the loading task, since it just needs to read RDF data and has optimised support for that. Though not within the scope of this paper, these figures could be improved by several optimisations. For example, the ArcadeDB population (and, in general, any Gremlin-based writing) could be realised in a single step, where data are read from SPARQL and streamed to the target Gremlin database, similarly to the way the rdf2neo tool works. Adopting the performance-optimised RDF format HDT might be another improvement [51]. All of these results are in line with our previous work [13]. Query performance. As explained in the benchmark report [46], we have designed 5 graph query categories and a total of 25 queries, based on both real use cases and other benchmark works, such as the Berlin benchmark [52].
As explained above, for each query, we wrote semantically aligned versions in the three tested languages. Results are shown in Figure 7. As shown, all the databases/languages perform within the order of hundreds of milliseconds for most queries (each finding and fetching the first 100 records on average). In particular, Neo4j is the fastest database in many cases. Results also show that Virtuoso is still a good choice for pure-RDF and pure-SPARQL applications, although this triple store has the limitation of not supporting the above-mentioned RDF-star. Gremlin on top of ArcadeDB is often the slowest endpoint; this might depend on the fact that ArcadeDB is a relatively new product and its developers are still actively improving it. Moreover, Gremlin is usually implemented starting from a common code base, and that might affect its performance compared to languages that are more native to a given database and its query engine. Another factor to consider is that, as said above, Gremlin is more sensitive to how queries are written. For instance, for the join and joinRel queries, we have noticed that the traversal step where returned results are limited to the first 100 matches matters significantly for performance, since exploring only the first 100 short chains in a graph pattern prevents the engine from traversing many more subgraphs and then cutting the results to be returned at the end (it might also change the query semantics and results, see our GitHub report for details). This also has an impact on scalability with respect to the database size: in fact, while both Virtuoso and Neo4j show good scalability, this is more problematic with ArcadeDB. Considering specific queries, as expected, the fastest, most homogeneous and most scalable queries were selections/projections from simple patterns, while the aggregations were among the most challenging queries.
This is in line with existing literature [13,52,53]: basic matching and projection are among the most used features in most query languages, while aggregations are notoriously hard to compute.

Fig. 5. The same query, matching reified ‘is_part_of’ statements between proteins and protein complexes, expressed in a) SPARQL, b) Cypher and c) Gremlin.

The semantic motif path queries (i.e., a kind of chain pattern query) are a particularly interesting category for the KnetMiner use cases, since in KnetMiner we often follow path chains to find entities associated with genes. As expected, Neo4j and Cypher excel in this kind of query, which is in line with their authors' claim that their database engine is optimised for traversals. We were surprised that these queries can be challenging for Gremlin; after further investigation, we noticed that graph patterns with variable-length relations (e.g., find protein pairs linked by a chain of ‘xref’ relations of max length = 3) can become quite slow with large datasets. Again, this is likely to depend on the way the query is written and on the fact that Gremlin traversals are hard to optimise automatically. Finally, the queries in the ‘counts’ category show very varying performance across the different databases, and we presume this depends on the fact that systems like Neo4j store summary data, such as the total number of nodes or relations, while other engines run the corresponding queries every time such summaries are asked for.

# 3.4. Theoretical considerations

It is useful to add a theoretical analysis of the algorithms used by rdf2pg. In previous work (supplemental material in [13]), we have shown how the SPARQL-based mapping from RDF graphs to LPG graphs can be formalised by means of abstract algebra, and we proved that the Cypher queries we generate in rdf2neo correspond to the mapped LPG. Similar reasoning can be applied to the GraphML conversion: that is, it is possible to formally define how an LPG maps to the XML elements of the GraphML format and use that to prove that the LPG we build from SPARQL queries is correctly converted into GraphML. Namely, the proof is analogous to the proof of Theorem 1 in our mentioned work, which combines definitions 4 (RDF/LPG mapping) and 5 (transformation induced by a mapping) with the GraphML semantics. Furthermore, we have shown that the computational complexity of rdf2neo is dominated by the SPARQL mapping queries and is PSPACE in the worst case, with a significant class of queries that can be reduced to LOGSPACE. Since the conversion from the LPG entities extracted by these queries to GraphML is linear with respect to the selected LPG nodes and relationships, this computational complexity holds for the GraphML converter too. In general, this computational complexity analysis is valid for any converter implemented with our framework, as long as the target-specific conversion has the same linearity (or does not exceed LOGSPACE/PSPACE). It is also worth considering more recent theoretical work concerning the properties of RDF/LPG mapping algorithms.

Fig. 6. Results from graph database population with varying size datasets. More details at [46].
Using the terminology introduced by [54], the transformation we define by means of SPARQL is a kind of database mapping from RDF to LPG (Definition 4 in [54]), and such a transformation uses SPARQL to also induce an LPG schema, consisting of all the labels, relation types and property domains and ranges that we implicitly extract from RDF. Such a transformation/mapping is certainly computable (our tool computes it!) and it is semantics-preserving by construction, that is, the translated LPG is a valid instance of the schema constructed from the SPARQL selections. According to the same paper by Angles et al., the information preservation of our transformation/mapping consists of the possibility of reconstructing the original RDF data from the translated property graph. In general, this is not a property of our RDF/PG transformations; e.g., we might have instances of the classes ‘Car’ and ‘Van’ on the RDF side and, for some reason, one might want to define a mapping where all car and van nodes are assigned the ‘Vehicle’ label. In such a case, the original RDF data would be impossible to reconstruct (i.e., no inverse PG/RDF mapping could exist). That said, we can show a set of conditions sufficient to make our transformation information-preserving. For instance, if the LPG node IRIs are always selected from a pattern like ?iri rdf:type ex:Car, and the label selected in this case is always ‘Car’, then, assuming no other selector maps to the ‘Car’ label, all the rdf:type relations can be reconstructed correctly from the LPG labels. Similar conditions on relationships and node/relation properties could be defined to ensure this information-preserving property of our tool's transformations.
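The condition just discussed can be illustrated with a toy round trip (a hedged sketch, not the formal treatment in [54]; all class and label names are hypothetical): when the class-to-label mapping is one-to-one, the rdf:type triples are recoverable from the LPG labels, whereas when two classes collapse into one label, no inverse mapping exists.

```python
# Toy illustration of the information-preservation condition discussed above.
# Class/label names are hypothetical examples.

def to_lpg(typed_nodes, label_of):
    """Forward mapping: each (iri, rdf_class) becomes a labelled LPG node."""
    return [(iri, label_of[cls]) for iri, cls in typed_nodes]

def to_rdf(lpg_nodes, class_of):
    """Inverse mapping, only well-defined when label_of is invertible."""
    return [(iri, "rdf:type", class_of[lbl]) for iri, lbl in lpg_nodes]

typed = [("ex:car1", "ex:Car"), ("ex:van1", "ex:Van")]

# One-to-one mapping: the round trip restores the original rdf:type triples.
one_to_one = {"ex:Car": "Car", "ex:Van": "Van"}
inverse = {v: k for k, v in one_to_one.items()}
restored = to_rdf(to_lpg(typed, one_to_one), inverse)

# Lossy mapping: both classes collapse into 'Vehicle'; no inverse exists.
lossy = {"ex:Car": "Vehicle", "ex:Van": "Vehicle"}
is_invertible = len(set(lossy.values())) == len(lossy)
```

The `is_invertible` check is the toy counterpart of the "no other selector maps to the same label" condition stated in the text.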
Note that, generally speaking, our SPARQL selectors consider a subset of the input RDF graph; hence this information-preservation property, that is, the possibility of going from the LPG data back to the RDF data the LPG was derived from, can only hold for the actually converted subset and not for the possibly bigger RDF graph one started from. This corresponds to the real use of rdf2pg, where in most cases the reversibility of the RDF/LPG mapping is only interesting for the subset of data that is actually converted in either direction; moreover, such reversibility is not always a desired and sought-for property (e.g., cars and vans merged into the vehicle class is a simplification where, likely, there is no interest in reconstructing the original RDF details).

# 4. Discussion

We have shown that labelled graph databases and their query languages have advantages and disadvantages that are complementary to more traditional Semantic Web technologies and Linked Data practices. The latter allow for managing dataset building and data sharing in a way that complies with the FAIR principles. In particular, RDF and existing ontologies or schemas based on RDF are still important means to ensure the goal of data interoperability. They also still play an important role in building pipelines to integrate heterogeneous data into unified knowledge graphs (i.e., ETL or ELT pipelines [55,56]). For instance, features like reusable URIs and standard schemas and ontologies produce graphs of data that are integrated in a seamless way. On the other hand, users who are not proficient with SPARQL and the Semantic Web might prefer to query knowledge bases by means of languages like Cypher.

Fig. 7 (Parts 1 and 2). Results for benchmarking three various-size datasets on the three target graph query languages and graph databases, using queries of different categories. See [46] for a detailed list of all the tested queries, their description and code.
We have shown that the performance of graph databases and triple stores is not extremely different, which allows for scenarios where data are first prepared using mainly RDF and related technologies and then loaded into an LPG database. Furthermore, our experience with KnetMiner proves that more complex architectures are feasible too, where, for instance, the same data are served via both SPARQL and Cypher, Gremlin support could easily be added, and, at the same time, all the access points and encoding formats are aligned to the same conceptual data model. All of this is possible by means of the rdf2pg framework, which comprises both a base library for building RDF-to-property-graph converters and two specific converters based on that library. Our experience and our tests also show that, while query languages like SPARQL and Cypher have similar expressivity, Gremlin feels like a lower level of abstraction, more suitable as a standard to build applications like multi-language or multi-model graph databases [42], or for use cases where advanced graph traversals are necessary. # 4.1. Related work As described in previous sections, this paper is an extension of our previous work on aligning RDF and Neo4j-based datasets, done within the context of the KnetMiner platform, where we have an interest in giving multiple access means to knowledge graphs [9]. Based on this initial work, we have found the rdf2neo approach suitable for the generalisation and extensions described hereby, and for managing the enterprise ETL use case described above, where we have similar needs to align mixed data models and technologies. In developing our framework, we have relied on literature comparing the two types of graph paradigms. For instance, [57] discusses various definitions of knowledge graphs and their applications, and [58] is a comprehensive review of knowledge graphs, how they are built and their applications, including link extraction from existing graphs.
The above-mentioned work [54] gives formal definitions of graph databases and shows the kinds of mappings that are possible between them. Work has been done to standardise property graph representations [59], convert between them automatically [14,60], and map query languages onto multiple paradigms [61,62]. These conversion approaches are usually based on fixed mappings between the RDF and the LPG data model. For instance, all rdf:type relationships are converted into a node with a given label, all datatype triples become node properties and all node-to-node triples are turned into relationships. This makes the RDF-to-LPG conversion simple, since one does not have to design how to map an RDF model onto the corresponding LPG, which, for example, allows the Neosemantics tool to perform automatic back-conversion from Neo4j to RDF. However, a drawback of this pre-defined mapping is that it converts RDF graphs in a flat way, mostly ignoring the better expressivity of the LPG model. In particular, if a set of triples reifies a relationship with properties, they are mapped one-to-one on the LPG side, rather than producing the corresponding single relationship. In contrast, a major goal of our work is allowing for the definition of how RDF graph patterns should be translated into LPG structures, especially in cases like reification. Although fixed mapping approaches avoid the overhead of defining custom mappings, our approach is more flexible and can address the cases where mappings more natural for the LPG model are desired. Moreover, we allow for using SPARQL as the language to define the mappings, since it does not require the users to learn any new special syntax, contrary to other approaches, such as [63]. Clearly, that is an advantage for Semantic Web experts, while other kinds of data practitioners are required to learn at least the basics of RDF and SPARQL.
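The reification case can be made concrete with a toy sketch (illustrative names, not the Neosemantics or rdf2pg API): a fixed mapping keeps the statement node and its triples one-to-one, while a custom mapping folds them into a single relationship with a property:

```python
# Toy illustration (hypothetical names, not a real library API): one reified
# RDF statement, "alice knows bob since 2020".
reified = [
    ("ex:st1", "rdf:subject", "ex:alice"),
    ("ex:st1", "rdf:predicate", "ex:knows"),
    ("ex:st1", "rdf:object", "ex:bob"),
    ("ex:st1", "ex:since", "2020"),
]

# (a) Fixed mapping: the statement node survives as an LPG node and every
# IRI-valued triple becomes its own relationship: three edges plus an extra node.
fixed_edges = [(s, p, o) for s, p, o in reified if o.startswith("ex:")]

# (b) Custom mapping: fold the reification pattern into one edge that carries
# the 'since' value as a relationship property, as the LPG model allows.
stmt = {p: o for _, p, o in reified}
custom_edge = (stmt["rdf:subject"], stmt["rdf:predicate"],
               stmt["rdf:object"], {"since": stmt["ex:since"]})

assert len(fixed_edges) == 3
assert custom_edge == ("ex:alice", "ex:knows", "ex:bob", {"since": "2020"})
```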
Another limit of our approach is that it is not bi-directional, i.e., there is no easy way to take a SPARQL-based mapping in the RDF-to-LPG direction and automatically compute the opposite LPG-to-RDF mapping. This is out of the scope of the rdf2pg framework and would possibly need to be addressed with more declarative mapping languages (similarly to R2RML [64]). As mentioned, part of the queries we have used in the presented benchmark tests are inspired by the well-known Berlin benchmarks [52], a seminal work on RDF and SPARQL performance. Various other works on performance testing of relational and NoSQL databases exist, graph databases in particular. For example, [53] compared Neo4j and the relational database PostgreSQL, while [65] compared Cypher/Neo4j, a Gremlin implementation on top of a Neo4j server, and a JPA object/relational mapping based on a MySQL database. In both cases, they found results similar to ours, that is, similar performance across the different storage and data access systems, with variability depending on the query types and use cases. Regarding the qualitative analysis of graph query languages, a recent study [66] surveyed different users of SPARQL and Cypher, concluding that they find them more similar than different. Interestingly, this contrasts with the analysis we have presented here, which highlights that user background and expertise significantly influence their perception of Cypher, SPARQL, and their respective data models. Additionally, we identify expressivity differences in these languages that impact the ease of writing specific query types, such as multiple graph pattern hubs and long graph chain patterns, factors the cited authors did not consider in their analysis. The Gremlin and TinkerPop project started in 2009 as an Apache Foundation project, based on existing graph databases and models [39]. The idea of graph traversals stems mainly from work in the area of network analysis and graph processing algorithms [67].
To the best of our knowledge, rdf2pg is the first framework that allows for customised mappings from RDF schemas to Gremlin-compatible databases, and the first work that compares the use of Gremlin and its performance with other LPG languages and their implementations.
Linked Data and labelled property graphs (LPG) are two data management approaches with complementary strengths and weaknesses, making their integration beneficial for sharing datasets and supporting software ecosystems. In this paper, we introduce rdf2pg, an extensible framework for mapping RDF data to semantically equivalent LPG formats and databases. Utilising this framework, we perform a comparative analysis of three popular graph databases - Virtuoso, Neo4j, and ArcadeDB - and the well-known graph query languages SPARQL, Cypher, and Gremlin. Our qualitative and quantitative assessments underline the strengths and limitations of these graph database technologies. Additionally, we highlight the potential of rdf2pg as a versatile tool for enabling polyglot access to knowledge graphs, aligning with established standards of Linked Data and the Semantic Web.
# I. INTRODUCTION

# A. Motivation

Organizing resources in a multidimensional classification space can provide an efficient resource management mechanism for users or application systems to efficiently operate a large set of resources from different dimensions [1]. The Resource Space Model is a normalized space that classifies resources from multiple dimensions, each of which is represented as a tree of classes (called the coordinates of the dimension) [2]. In a resource space, resources are classified into points according to coordinates at each dimension, so that each point holds a set of resources that can be located by the coordinates of points. Partial order relations such as the subclass relation and the inclusion relation can be defined on the coordinates of a dimension to form a coordinate tree, and a range can be defined on the partial order relations. The partial order on points in the space is induced by their coordinates at the dimensions. Resources located at a sub-coordinate are included in its parent coordinate in the coordinate tree. A subspace query in a resource space can be defined to explore resources by giving ranges at each dimension. A subspace query with aggregation can be further defined to obtain the non-empty points within a subspace, where each point contains resources aggregated from its descendant points within the subspace along the partial order relations on the coordinate tree. Resources aggregated at points can be used for further ranking and selection of resources at each point, which is necessary and important for analyzing resources at points in the space.

Xiaoping Sun is with the Key Lab of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China (e-mail: sunxiaoping@ict.ac.cn). Hai Zhuge is with Great Bay University and Great Bay Institute for Advanced Study, Dongguan, Guangdong, China (e-mail: zhuge@gbu.edu.cn). This work was supported by the National Science Foundation of China (project no. 61876048).

# B.
An Example of Subspace Query

A resource space of papers RS(topic, date) consists of a topic dimension representing class (or coordinate) trees on research areas and a date dimension representing the inclusion relation on the publication dates of papers, which take the following forms (the detailed definition of the space structure can be represented in XML):

• topic = (CS(AI(NLP, ML, ...), DB(Model, Index, ...), ...), ...), where CS (i.e., Computer Science) is a coordinate at the topic dimension, AI (i.e., Artificial Intelligence) and DB (i.e., Database) are two sub-coordinates of CS, NLP (i.e., Natural Language Processing) and ML (i.e., Machine Learning) are two sub-coordinates of AI, and Model and Index are sub-coordinates of DB.

• date = (..., 2020(Spring(1, 2, 3), Summer(4, 5, 6), Autumn(7, 8, 9), Winter(10, 11, 12)), 2021(Spring(1, 2, 3), Summer(4, 5, 6), Autumn(7, 8, 9), Winter(10, 11, 12)), ...).

A coordinate is a subclass (representing a subset of resources) of its parent coordinate and is identified by a path from the root to the coordinate; for example, topic/CS/DB represents the coordinate database (DB) under its parent coordinate CS at the topic dimension. A full order relation at the same level of the date dimension can be defined, e.g., the order date/2020 < date/2021 at the year coordinate level and the order date/2020/01/01 ≤ date/2020/01/02 at lower coordinate levels. A point $\langle topic/CS/DB, date/2020\rangle$ locates the set of papers from the coordinate topic/CS/DB at the topic dimension and the coordinate date/2020 at the date dimension. As inclusion relations are transitive, a partial order relation $\subset$ between points can be induced by the structure of the dimensions, e.g., $\langle topic/CS/DB/INDEX, date/2020\rangle \subset \langle topic/CS/DB, date/2020\rangle$ because topic/CS/DB/INDEX is a sub-coordinate of topic/CS/DB.
$\langle topic/CS/DB/INDEX, date/2020/01\rangle \subset \langle topic/CS/DB/INDEX, date/2020\rangle$ because date/2020 includes date/2020/01. A query requirement on this resource space is as follows: select the top-10 most cited papers from each of the top-10 most cited subtopics in the database area from year 2020 to 2024. This kind of query is common in literature surveys, but it is time-consuming to collect, rank and select resources according to the partial order relations on the coordinate trees of different dimensions. This query requirement can be met by a subspace query with aggregation operations according to the partial order relations. The implementation of the query needs to define aggregation operators on a range within the topic dimension and a range within the date dimension. The aggregation operator at the topic dimension selects papers from all sub-coordinates within the range at the topic dimension and appends them to their parent coordinates. As there can be a large number of points with papers within the subspace, a top-k ranking operator needs to be defined for ranking the papers in a point according to their citation counts.
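These two ingredients, aggregating resources upward along the coordinate-tree partial order and then ranking within points, can be sketched in a few lines (toy data and hypothetical helper names, not the paper's implementation):

```python
# Minimal sketch (toy data, assumed names) of aggregation along the
# coordinate-tree partial order followed by top-k selection within a point.
from itertools import product

parent = {  # coordinate trees as parent maps
    "topic/CS/DB/INDEX": "topic/CS/DB",
    "topic/CS/DB/MODEL": "topic/CS/DB",
}
points = {  # point -> {paper: citation count} directly located there
    ("topic/CS/DB/INDEX", "date/2020"): {"p1": 90},
    ("topic/CS/DB/MODEL", "date/2020"): {"p2": 40, "p3": 5},
}

def ancestors(c):
    """The coordinate itself plus every ancestor up its tree."""
    out = [c]
    while c in parent:
        c = parent[c]
        out.append(c)
    return out

def aggregate(points, agg_dims):
    """R(p) = union of R(s) over descendants s, on the aggregated dimensions."""
    agg = {}
    for point, papers in points.items():
        axes = [ancestors(c) if i in agg_dims else [c] for i, c in enumerate(point)]
        for anc in product(*axes):  # lift resources to every ancestor point
            agg.setdefault(anc, {}).update(papers)
    return agg

def top_k(papers, k):
    """Rank one point's papers by citation count and keep the best k."""
    return sorted(papers, key=papers.get, reverse=True)[:k]

agg = aggregate(points, agg_dims={0})       # aggregate on the topic dimension
best = top_k(agg[("topic/CS/DB", "date/2020")], k=2)
assert best == ["p1", "p2"]                 # 90 and 40 citations
```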
Thus, the query can be composed of four operators: select, top_resource, top_point and subspace, as follows:

SELECT topic, paper_title, paper_citation_count
FROM top_resource(topk=10, measure=paper_citation_count)
FROM top_point(topk=10, measure=point_citation_count)
FROM subspace(
  [dimension=topic, range=[none, topic/CS/DB], rel=subclass, agg=TRUE,
   point_citation_count=SUM(paper_citation_count)],
  [dimension=date, range=[date/2020, date/2024], rel=year, agg=FALSE])
FROM RS;

The subspace operator specifies an aggregation within the range [none, topic/CS/DB] at the topic dimension, with none indicating that the lower bound of the range is omitted. The parameter rel=subclass indicates that the subclass partial order relation is used within the range at the topic dimension, while rel=year indicates that the year order is used within the range at the date dimension. The parameter agg=TRUE indicates that the aggregation operator is applied to the range defined at the topic dimension. The attribute paper_citation_count records the citation count of a paper, and paper_title records the paper's title. Each point has a variable point_citation_count that is equal to the sum of the citations of the papers within the point, which is calculated at each point of the subspace after the aggregation operation. The operator top_point(topk=10, measure=point_citation_count) selects the points with the top-10 total citation counts after aggregation.
The operator top_resource(topk=10, measure=paper_citation_count) selects the top-10 cited papers from each of the top-10 points according to the attribute paper_citation_count. In the selection operator, topic, paper_title and paper_citation_count indicate that the topic coordinate of each point in the result is listed, together with the papers in each point, showing the title and the citation count of each paper. As resources are aggregated at different levels of topics, not only can quantitative measures on resources be calculated, but ranking and selection of resources can also be done in points at different levels. In comparison, it is hard to implement this task automatically with SQL and OLAP queries, as their GROUP BY and ROLLUP operators can aggregate quantitative measures but cannot aggregate resources along the partial order on coordinates, because their aggregation is defined on dimensions rather than on the partial order relations on the coordinate trees of dimensions.

# C. Necessity of Building an Index for Efficient Query

Implementing such a query faces a performance challenge. A subspace query can be modeled as an intersection of sets of resources whose coordinates match the range at each dimension of the query. In the example given in Section I.B, there are three coordinates under topic/CS/DB (topic/CS/DB/INDEX, topic/CS/DB/MODEL and topic/CS/DB/RL) at the topic dimension, and 60 coordinates from 2020 to 2024 at the date dimension.
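The size of the resulting search space is just the product of the per-dimension range widths; a quick sketch with the example's numbers (assuming the three listed subtopics plus topic/CS/DB itself, and a reading of the 60 date coordinates as the months of 2020 to 2024):

```python
# Quick check of the search-space size (toy enumeration; the concrete date
# coordinates are an assumed reading of "60 coordinates from 2020 to 2024").
from itertools import product

topics = ["topic/CS/DB", "topic/CS/DB/INDEX",
          "topic/CS/DB/MODEL", "topic/CS/DB/RL"]
dates = [f"date/{y}/{m:02d}" for y in range(2020, 2025) for m in range(1, 13)]

candidates = list(product(topics, dates))
assert len(dates) == 60
assert len(candidates) == 240   # every candidate point must be checked
# Each extra dimension multiplies the space, e.g. ten publisher coordinates:
assert len(topics) * len(dates) * 10 == 2400
```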
Thus, it needs to check each point within the product of the coordinates at the two dimensions, including $\langle topic/CS/DB, date/2020\rangle$, $\langle topic/CS/DB/INDEX, date/2020\rangle$, $\langle topic/CS/DB/MODEL, date/2020\rangle$, $\langle topic/CS/DB/RL, date/2020\rangle$, $\langle topic/CS/DB, date/2020/01\rangle$, $\langle topic/CS/DB/INDEX, date/2020/01\rangle$, etc. There are 240 points to be checked. If there are more subtopics under topic/CS/DB, the search space will be much larger. If a query includes more dimensions, such as the organization, publisher and regions of papers, the search space increases exponentially with the number of dimensions. A simple solution is to scan all coordinates at all dimensions and compute intersections for each possible product of coordinates at different dimensions, which can involve a large number of empty points in the space. It is necessary to design an index that can handle partial order relations on coordinates, to reduce the cost of scanning empty points and calculating empty intersections. Indexing techniques are the major tools for improving querying in a multidimensional space. The coordinate tree at a dimension can be deemed an index of inclusion relations with bounded depth at that dimension. Following the index of the coordinate tree, resources with matched coordinates can be located. However, the following two issues need to be handled to build an index with inclusion relations. (1) The width of a coordinate tree is unbounded, which makes the width of an index also unbounded. A classical index can be built in a multidimensional space with linear coordinates and a metric distance, so that a subspace can be modeled as a hypercube within the space.
An index such as an R-tree or a KD-tree can be built to help locate the data within the subspace without scanning the space outside the range at each dimension [3]. Classical multidimensional indexes use space partitioning or data partitioning to build an index recursively, such that a lower indexing node with a constant number of child indexing nodes covers a smaller subspace, and following the indexing links helps avoid visiting ranges outside the query subspace. The constant number of child links of each indexing node makes an index efficient when the depth of the index is bounded in logarithmic scale. However, this strategy cannot be directly applied in a resource space with partial order relations on the coordinates of dimensions, because the direct child nodes of a coordinate have no order relations among them; e.g., topic/CS/DB can have ten or more direct child coordinates, and it is impossible to use a constant number of child links to represent all of these child links based only on the inclusion relations among coordinates. Thus, even if one builds two child links for topic/CS/DB, one cannot avoid comparing all direct subtopics of topic/CS/DB to determine which one should be used as the next step of query processing. Building an index with a constant number of child links for each indexing node is therefore impossible. (2) When combining multiple dimensions, the number of non-empty points within intersections of coordinates at different dimensions increases exponentially, which makes an index even wider if non-empty intersections are indexed. The cost of calculating the intersections of resources at coordinates from multiple dimensions is high, as it may involve all resources of the space. Building indexing nodes for non-empty intersections can help reduce the cost of calculating intersections, by following indexing links directly to the resources within the intersections.
The depth of an indexing tree with inclusion relations is still bounded when indexing nodes on points of non-empty intersections are added. However, it is impossible to build indexing links for all non-empty points. The indexing nodes need to be selected in an effective way to keep the width of the index bounded. An index with a bounded depth has a bounded width if the total number of indexing nodes is bounded. However, it is an NP-hard problem to build such an index with a bounded number of indexing nodes and optimal querying cost. Moreover, in a multidimensional space with only partial order relations among points, it is impossible to build an index in which each node has a constant number of child links. Classical graph indexes [4] defined on a metric distance in the space cannot be directly applied to a space with only partial order relations.

# D. Contribution

The contributions of this paper are summarized as follows: (1) A subspace aggregate query language is designed to represent resource queries with aggregation operations in a subspace defined on partial order relations at different dimensions of a resource space. (2) An approach to generating a graph index based on partial order relations among points is proposed to support efficient subspace queries in a resource space, with the following characteristics: (a) the index reduces the cost of calculating intersections during query processing, using a limited number of index links to control the number of index nodes and improve query efficiency based on the probability distribution of intersection costs; (b) the index-node splitting strategy makes index nodes hold a balanced number of resources, so that the cost of calculating intersections can be further reduced; and (c) the index uses shortcut links between nodes of the coordinate tree to let queries follow the full order relation, so that the performance of locating a subspace can be improved.

# II. SUBSPACE QUERY WITH AGGREGATION IN RESOURCE SPACE

# A.
Resource Space

A Resource Space (in short, $RS$) is a space consisting of $n$ abstraction dimensions $X_1, ..., X_n$ [2], represented as $RS(X_1, ..., X_n)$, where each dimension consists of coordinates, $X_i = \{c_{i1}, c_{i2}, ..., c_{ik}\}$. Resources in a resource space are identified by points; each point has a projection (a coordinate) at each dimension. A point identifies a set of resources. A partial order $\subset_{X_i}$ can be defined at any dimension $X_i$ (with root $T_i$), and therefore a range can be defined according to the order. The partial order can be a subclass relation, an inclusion relation or a comparison order relation. A subspace of a resource space can be determined by giving a range $S_i = [a_i, b_i]_{\subset_{X_i}}$ at each dimension $X_i$ according to the partial order $\subset_{X_i}$ ($a_i, b_i \in X_i$ s.t. $a_i \subset_{X_i} b_i$).

# B. Subspace Query with Aggregation in Resource Space

A subspace query with an aggregation operator obtains points with resources aggregated from lower-level points along the partial orders defined at the dimensions of the subspace. An aggregation operator defined on a range $S_i = [a_i, b_i]_{X_i}$ at dimension $X_i$ gets all points within the range. A subspace query with aggregation defined on ranges $S_1$, $S_2$, ..., and $S_k$ at $k$ dimensions is formally represented as follows:

$R(RS(agg(S_1), ..., agg(S_k), S_{k+1}, ..., S_n)) = \{\langle p, R(p)\rangle \mid$ for all $p = \langle c_1, ..., c_n\rangle$ satisfying $p \in RS(S_1, ..., S_n)$, and $R(p) = \bigcup_s R(s) \neq \emptyset$ for all $s \in S_i$ s.t. $s \subset_{X_i} p$ for $i = 1, ..., k\}$,

where $k \in [1, n]$, and $s \subset_{X_i} p$ if $s_i \subset_{X_i} c_i$ for two points $p = \langle c_1, ..., c_i, ..., c_n\rangle$ and $s = \langle s_1, ..., s_i, ..., s_n\rangle$; $R(p)$ is the set containing the resources aggregated at $p$ from the descendant points $s$ of $p$; $\langle p, R(p)\rangle$ is the output tuple containing the point coordinate $p$ and the resources aggregated at $p$; and $agg(S)$ is an aggregation operator that returns all coordinates specified by the whole structure of $S$. A resource in a resource space has an identity, a set of attributes and content, located by coordinates. The query gets the resources aggregated from descendant points within the subspace along the partial order relations on the coordinates of different dimensions.
The following are three examples:

(1) The query $R(RS(agg(S_{topic} = [topic/CS/DB/INDEX, topic/CS/DB]_{\subset_{topic}}), S_{date} = [date/2020, date/2021]_{\leq_{year}}))$ specifies an aggregation operator at the topic dimension, where $S_{topic} = [topic/CS/DB/INDEX, topic/CS/DB]_{\subset_{topic}}$ is a range defined on the subclass relation $\subset_{topic}$ at the topic dimension, and $S_{date} = [date/2020, date/2021]_{\leq_{year}}$ is a range defined on the temporal order relation $\leq_{year}$ at the date dimension. $RS(S_{topic}, S_{date})$ is the product of the two coordinate sets $\{topic/CS/DB/INDEX, topic/CS/DB\}$ and $\{date/2020, date/2021\}$, so it contains four points: $p_1 = \langle topic/CS/DB/INDEX, date/2020\rangle$, $p_2 = \langle topic/CS/DB/INDEX, date/2021\rangle$, $p_3 = \langle topic/CS/DB, date/2020\rangle$ and $p_4 = \langle topic/CS/DB, date/2021\rangle$. As agg is defined at the topic dimension, the papers in $p_1$ and $p_2$ will be aggregated at $p_3$ and $p_4$, respectively.

(2) The query $R(RS(agg(S_{topic} = [topic/CS/DB/INDEX, topic/CS/DB]_{\subset_{topic}}), agg(S_{date} = [date/2020, date/2021]_{\leq_{year}})))$ gets the papers aggregated along the paths at both dimensions.
(3) The query $R(RS(agg(S_{topic} = [topic/CS/DB/INDEX, topic/CS/DB]_{\subset_{topic}}), agg(S_{date} = [none, date/2021]_{\subset_{date}})))$ aggregates papers at both the topic dimension and the date dimension, where the range $S_{date} = [none, date/2021]_{\subset_{date}}$ includes all coordinates under date/2021 based on the inclusion relation $\subset_{date}$ on the coordinates at the date dimension. Papers at $\langle topic/CS/DB/INDEX, date/2021/01\rangle$ will be included at the points $\langle topic/CS/DB/INDEX, date/2021\rangle$, $\langle topic/CS/DB, date/2021/01\rangle$ and $\langle topic/CS/DB, date/2021\rangle$.

A SQL-like query statement is designed to implement subspace queries with aggregation. The detailed syntax of the SQL-like query statement is presented in Appendix 1.

# C. Hardness of Indexing Points in Resource Space for Subspace Query

A subspace query with aggregation is equivalent to calculating intersections between the resource sets of points within the ranges at different dimensions.

Theorem 1. A subspace query with aggregation operators follows the distributive law of the intersection operator: $R(RS(\{agg(S_1), agg(S_2), ..., agg(S_n)\})) = \bigcap_{i=1..n} R(RS(agg(S_i)))$.

Proof. According to the definition of a subspace query with aggregation, $agg(S_i) = \{\langle p, R(p)\rangle \mid p = \langle c_1, ..., c_i, ..., c_n\rangle$ and $R(p) = \bigcup_s R(s)$ and $R(s) \neq \emptyset$ for $s = \langle s_1, ..., s_i, ..., s_n\rangle$ such that $s_i \subset_{X_i} c_i \subset_{X_i} e_i$ for $e_i \in S_i$, for $i = 1, ..., n\}$, which requires that $c_i$, $s_i$ and $e_i$ belong to $S_i$ for all dimensions. As $agg(S_i)$ requires only that $c_i$, $s_i$ and $e_i$ belong to $S_i$ with the same relations, an intersection of $R(RS(agg(S_i)))$ for $i = 1, \cdots, n$ limits the $c_i$, $s_i$ and $e_i$ of the points within the intersection to belong to $S_i$ for all dimensions. □

According to Theorem 1, the search space of a subspace query is the product of the coordinates within the range at each dimension, which forms a large subspace of size $|S_1| \times \cdots \times |S_n| = \prod_{S_i \in S} |S_i|$. The hardness of processing a subspace query comes from the cost of calculating the intersections of points within the ranges at different dimensions. It is necessary to build an efficient index for subspace aggregation queries. Any non-empty intersection of ranges can be represented by a set of points in the space, which also have relations with their parent points in the space. Thus, indexing nodes are also points in the space, and partial order relations can be used as indexing links from indexing nodes to the non-empty points of an intersection. A range at one dimension induces a sub-coordinate tree, with the root node starting from the upper-bound coordinate and leaf nodes above the lower-bound coordinate.
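The distributive law of Theorem 1 can be sanity-checked numerically on toy data (assumed sets, a check rather than a proof): selecting per dimension and intersecting gives the same result as the joint subspace selection.

```python
# Toy check of the distributive law: per-dimension selection followed by
# intersection equals the joint selection over both ranges at once.
resources = {
    "paperA": {"topic": "topic/CS/DB/INDEX", "date": "date/2020"},
    "paperB": {"topic": "topic/CS/DB/INDEX", "date": "date/2022"},
    "paperC": {"topic": "topic/CS/AI/ML", "date": "date/2020"},
}
S_topic = {"topic/CS/DB", "topic/CS/DB/INDEX"}  # range at the topic dimension
S_date = {"date/2020", "date/2021"}             # range at the date dimension

def select(dim, rng):
    """R(RS(agg(S_i))): all resources whose coordinate at dim falls in rng."""
    return {r for r, coords in resources.items() if coords[dim] in rng}

joint = {r for r, c in resources.items()
         if c["topic"] in S_topic and c["date"] in S_date}
assert joint == select("topic", S_topic) & select("date", S_date)  # {"paperA"}
```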
The result of a subspace query corresponds to many such trees, whose resources are shared by all the sub-trees at different dimensions. Each point has many parent points, and a query can follow different paths to reach the non-empty points. When a point is obtained through one of its parent points on a path, the cost is the number of resources the parent point holds, because the parent node needs to locate the target point among all the child points it has. If an indexing node holds exactly the target points within the query range, the intersection cost can be omitted for those points. Thus, the cost of processing a query from a root point to a non-empty point on the index is the sum of the numbers of children of the nodes on the path, on which each indexing node has a link to one of its child points holding the target point, until the final target point is reached. An index should have a bounded number of indexing nodes to achieve efficient query processing; otherwise, the cost of building the index can be very high. This feature can be achieved by a classical index, in which each indexing node has a constant number of child links and one of its child links helps narrow down the search space until the final target is reached. This cannot be achieved in an RS with partial order relations, because there is no partial order relation between the child points of an indexing node. For example, topic/CS/DB can have ten child coordinates, each having a partial order relation with topic/CS/DB. These cannot be represented by two links, because the child points are independent of each other. However, in a metric space, the child points of an indexing node can be represented by two bounds.
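That two-bound representation can be sketched as a toy binary range tree (illustrative only, not the index proposed in this paper): with a linear full order, every indexing node needs just two child links.

```python
# Toy binary range tree: each node covers a range and has two child links,
# one per half-range, which is possible only because of the linear order.
def locate(lo, hi, key, depth=0):
    """Descend by halving the range until it is one unit wide; return the
    number of indexing nodes visited on the way down."""
    if hi - lo <= 1:
        return depth
    mid = (lo + hi) // 2
    if key < mid:
        return locate(lo, mid, key, depth + 1)
    return locate(mid, hi, key, depth + 1)

# Any lookup in [0, 128) visits log2(128) = 7 nodes with fan-out 2; no such
# halving exists when the children of topic/CS/DB are mutually unordered.
assert locate(0, 128, 37) == 7
assert locate(0, 128, 0) == 7
```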
For example, an indexing node may have many child points within an integer range $[0, 100]$, but it can use just two sub-links, one pointing to $[0, 50]$ and another to $[50, 100]$, to partition the child points, because half of the child points of $[0, 100]$ can be represented by the range $[0, 50]$ thanks to the linear full order relation on coordinates. Thus, an index with a constant number of indexing links can be built in a metric space. Although a full order can be imposed on coordinates, it cannot represent partial order relations. Building an index in a resource space with partial order relations on points therefore needs to balance index size against query processing efficiency. However, building an index of inclusion relations among points with a bounded size is a hard problem. Theorem 2. Building an index $T$ of inclusion relations with a bounded size $|T| \leq O(n^{c})$ that achieves the minimum querying cost $cost(T)$ for subspace queries is an NP-hard problem. Proof. In a Resource Space, each dimension $X_{i}$ has a tree $G_{i}$ of partial order relations between coordinates. Given a point $p = \langle c_{1}, \ldots, c_{n} \rangle$ that has resources, each $c_{i}$ has a path $P_{i}$ to the root of the dimension tree $G_{i}$. Then the ancestor points of $p$ form the set $U = \{\, s \mid s \in 2^{P_{1} \times P_{2} \times \cdots \times P_{n}} \,\}$. Obviously, we cannot build an index link from each $s$ of $U$ to $p$. For a group $D$ of such non-empty points, each point $p_{i}$ has such a set $U_{i}$ containing all ancestors of $p_{i}$. As each $s$ in $U_{i}$ also has inclusion relations with other elements of $U_{i}$, a lattice graph on $U_{i}$ is formed, and all the $U_{i}$ together form a larger lattice $G_{S}$ from the root $r$ of the space to each $p_{i}$ in $D$.
When an index is built from inclusion relations, it should consist of all points directly holding resources, the root $r$ of the space, and a subset $S = \{\, s_{i} \mid s_{i} \in G_{S} \,\}$, forming a sub-lattice graph $T$ with $r$ as the root, the $p_{i}$ as leaf nodes, the $s_{i}$ as internal indexing nodes, and each link of $T$ representing an inclusion relation between its two end nodes. The cost of searching for a point $p_{i}$ is the sum of the numbers of child links of the points on a path from $r$ to $p_{i}$. If each coordinate tree has bounded depth, the search depth on the index $T$ is also bounded by the depth of the deepest tree among all $G_{i}$. Thus, the cost of building the index and processing a query is determined only by the number of indexing nodes $|T|$ and the cost of determining the child node at each indexing node on the path leading to $p_{i}$, i.e., $cost(T) = \sum_{P_{k} \subset T} \sum_{s_{i} \in P_{k}} |s_{i}|$. Then, building such an index $T$ that attains the optimal subspace query cost with a bounded number of indexing nodes, $|T| \leq O(n^{c})$ with $n = |D|$, can be modeled as $\arg\min_{T \subset G_{S},\ 0 \leq |T| \leq O(n^{c})} \sum_{P_{k} \subset T} \sum_{s_{i} \in P_{k}} |s_{i}|$, which is a hard problem. It can be modeled as the constrained shortest path problem, which seeks a path from a source node $s$ to a target node $t$ with bounded resources and optimal cost on a weighted graph. Specifically, each parent node in $G_{S}$ has a cost obtained by summing the number of non-empty points along the paths to the root of the space, so that each link carries a number of resources summed from its child point to its parent point in the space.
Then, the cost $cost_{ij}$ of a link from $i$ to $j$ can be defined as $cost_{ij} = 1 - |R(l_{ij})|/|R(j)|$, where $R(l_{ij})$ represents the resources accumulated from point $i$ to $j$. If $|R(l_{ij})| = |R(j)|$, then $cost_{ij} = 0$. As $T$ contains a path from $r$ to each non-empty point $p_{i}$, the path $P_{i}$ from $r$ to $p_{i}$ with minimal cost $\sum_{c_{ij} \in P_{i}} c_{ij}$ also makes $cost(T)$ minimal. That is, if we can find a weighted shortest path from $r$ to each $p_{i}$, we can minimize $cost(T)$ by combining all such paths into $T$, where each path from $r$ to $p_{i}$ has minimal cost. Moreover, the weight $w_{ij}$ of each link $l_{ij}$ equals 1; that is, all links have the same weight. Then $|T| = \sum_{w_{ij} \in T} w_{ij} \leq n^{c}$ indicates that the total weight must be bounded. That is, if we could build an index $T$ with bounded size and optimal cost within polynomial time, we could obtain a constrained weighted shortest path for each pair of $r$ and $p_{i}$, which is an NP-hard problem even when $G$ is a DAG [5]. □

[Figure 1: system framework for storing a Resource Space on a key-value database, with example point and subspace queries (e.g., SELECT title FROM <topic/CS/DB, date/2021>) over venue, topic and date dimensions; RSM instances with schema mapping are stored on a key-value database over the Hadoop Distributed File System.]

# III. INDEX FOR SUBSPACE QUERY WITH AGGREGATION

# A.
Store Points with Resources in Key-value Database

As a key-value document database can efficiently store and query documents through their unique keys [6], using the point coordinates of resources as keys in a key-value document database is a way to efficiently implement resource storage and query based on point coordinates. A subspace query can be mapped into multiple key-value queries with the coordinates of the points in the subspace as keys. Figure 1 shows the system framework for storing an RS on a key-value database. The underlying layer is the key-value database deployed on a distributed file system. Multiple RS instances can be stored on the key-value database as the RSM instance layer. The schema layer defines the structure of the resource space, and the top layer is the query interface. The right-hand part is the index structure supporting efficient subspace queries with aggregation by mapping a query into subspace points and keys of the underlying key-value database to obtain resources. Locating a resource by its ID $r$ and coordinate $p$ requires first finding the resource set $R(p)$ by issuing a key-value query with $p$ as the key, which takes $O(1)$ time, and then finding $r$ within $R(p)$, which takes another $O(|R(p)|)$ steps. However, a key-value document database does not account for the partial order relations of coordinates when processing a subspace query with aggregation. For a point $p = \langle c_{1}, \ldots, c_{n} \rangle$ and a subspace query on $RS(S) = S_{1} \times \cdots \times S_{n}$ with $S_{i} = [l_{i}, u_{i}]$, it must be checked whether $l_{i} \subset_{X_{i}} c_{i} \subset_{X_{i}} u_{i}$ according to $\subset_{X_{i}}$ to determine whether $p$ is in the query result. Even if a reachability matrix is built for $\subset_{X_{i}}$ so that $l_{i} \subset_{X_{i}} c_{i} \subset_{X_{i}} u_{i}$ can be checked immediately, it still takes $N \times D$ comparisons for $N$ resources with $D$ dimensions to obtain the resources within the range, where many points outside the range must be checked. To facilitate a subspace query over multiple dimensions, a resource with coordinate $p = \langle c_{1}, c_{2}, \ldots, c_{n} \rangle$ can be mapped into multiple key-value pairs by using each coordinate $c_{1}, c_{2}, \ldots, c_{n}$ as a key. A subspace query can then be decomposed into an intersection of sub-queries on each dimension, but storing resources at each of their coordinates can make the resource set at each coordinate too large for intersection computation. An index can be built that maps resources at the points of a subspace to their corresponding keys in the key-value document database, helping to efficiently locate points within a subspace with a lower intersection cost according to the inclusion and partial order relations among points.

# B.
Subspace Index

A graph index is represented as $G = \langle V, E, RS(X) \rangle$, where $V$ is a set of non-empty points in the resource space $RS(X)$ (with partial order definitions on its dimensions) together with indexing nodes added according to inclusion links between coordinates and intersection links between points, and $E$ is the set of indexing links. A reachability matrix $M_{i}$ is built for each dimension $X_{i}$ to find descendant coordinates. New vertices and indexing links are built as more resources are added. Vertices in $G$ are connected by the following types of indexing links in $E$: (1) inclusion link: an inclusion link $l_{d} = \langle s, p, |R(s)| \rangle$ can be added for two given points $s = \langle s_{1}, \ldots, s_{m} \rangle$ and $p = \langle c_{1}, \ldots, c_{n} \rangle$ if $p \subset s$, where $|R(s)|$ is the number of resources at $s$, and $s$ and $p$ are two indexing nodes. $l_{d}$ is a one-direction link; the reverse direction of $l_{d}$ represents an ancestor relation. (2) intersection link: two intersection links $l_{I} = \langle c_{i}, p_{ij}, \Sigma_{p} |R(p)| \rangle$ and $l_{J} = \langle c_{j}, p_{ij}, \Sigma_{p} |R(p)| \rangle$ can be added to indicate that the resources at point $p_{ij}$ belong to both $c_{i}$ and $c_{j}$ at dimensions $X_{i}$ and $X_{j}$, such that $c_{i} \subset_{X_{i}}^{k} c_{is}$, $c_{j} \subset_{X_{j}}^{k} c_{jt}$, and $c_{is}, c_{jt} \in p_{ij}$. An inclusion link can also induce an intersection link. Following links $l_{I}$ and $l_{J}$, two coordinates at two dimensions that have a non-empty intersection can be obtained.
(3) order link: an order link $l_{o} = \langle c_{i1}, c_{i2}, \leq_{X_{i}} \rangle$ can be added if there is a partial order $\leq_{X_{i}}$ between $c_{i1}$ and $c_{i2}$ and there are resources with coordinates $c_{i1}$ and $c_{i2}$. (4) short-cut link: a short-cut link $l_{s} = \langle c_{i1}, c_{i2}, \leq_{X_{i}}^{k} \rangle$ can be added to indicate that there is a path $\leq_{X_{i}}^{k}$ of relation type $\leq_{X_{i}}$ from $c_{i1}$ to $c_{i2}$.

# C. Generating Basic Index

The basic index, consisting of inclusion links between non-empty coordinates at each dimension, is created by the following steps. For each point $p = \langle c_{1}, \ldots, c_{n} \rangle$ in the key-value database: (1) create an index node $v$ with $p$ as its ID; (2) link the resources at point $p$ to $v$ by inclusion links; (3) create an index node for each coordinate of the point, with the coordinate $c_{i}$ as the node ID, and link it to $v$ by an inclusion link; (4) insert the index node $c_{i}$ into a path from the root coordinate $T_{i}$ to $c_{i}$ on the graph index according to the coordinate tree. [Figure 2: graph index with inclusion links, intersection links and short-cut links over the topic and date dimensions, distinguishing points within and outside the subspaces $S=[none, topic/CS/DB]$ and $S=[date/2021/01, date/2021/02]$.] In this way, a tree of indexing nodes is built for each dimension $X_{i}$ to represent the inclusion relations on the coordinates of that dimension, where each coordinate corresponds to one index node that links to non-empty points with resources. Thus, a query on a resource at point $p = \langle c_{1}, \ldots, c_{n} \rangle$ can start from the root $T_{i}$ of dimension $X_{i}$, reach the index node of coordinate $c_{i}$, and locate the index node of $p$ for the target resources.
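The basic-index construction steps above can be sketched as follows (a minimal illustration with hypothetical data structures, not the paper's implementation): the index is reduced to a map from each coordinate to the set of non-empty points reachable below it, and a dict standing in for the coordinate tree drives the path insertion of step (4).

```python
def build_basic_index(points, parent_of):
    """Map each coordinate to the set of non-empty points reachable below it.
    points: coordinate tuples that hold resources (steps 1-2 create a node per
    point); parent_of: coordinate -> parent coordinate (None at a root),
    standing in for the coordinate tree used in step (4)."""
    index = {}
    for p in points:
        for c in p:                  # step (3): one entry per coordinate of p
            node = c
            while node is not None:  # step (4): insert along the path to root
                index.setdefault(node, set()).add(p)
                node = parent_of.get(node)
    return index

# Hypothetical two-dimensional example:
parent_of = {"topic/CS/DB": "topic/CS", "topic/CS": None,
             "date/2021/01": "date/2021", "date/2021": None}
idx = build_basic_index([("topic/CS/DB", "date/2021/01")], parent_of)
```

After construction, the root coordinate of either dimension reaches the non-empty point, so a lookup can start from any dimension's root.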
For example, Figure 2 shows a graph index consisting of two coordinate trees, the left one for the topic dimension and the right one for the date dimension. A query on the point $\langle topic/CS/DB/Index, date/2021/01 \rangle$ starts from the root of the topic dimension and reaches the index node $\langle topic/CS/DB/Index, date/2021/01 \rangle$, shown as the green rectangle in the lower left corner of Figure 2.

# D. Generating Intersection Links

To reduce the cost of calculating intersections between coordinates at different dimensions when processing a subspace query, heuristic rules are used to select pairs of coordinates at two dimensions between which intersection links are built through intersection indexing nodes. Heuristic Rule 1. The greater the difference between the numbers of resources held by two coordinates, the higher the probability of building an intersection link between them, compared with coordinates holding closer numbers of resources. Heuristic Rule 2. If two coordinates are at different levels of the coordinate trees of their dimensions, an intersection link is built between them with higher probability, as they are more likely to hold very different numbers of resources than coordinates at the same level. For each node $v$ on the basic index, intersection links are added according to the above two heuristic rules by the following steps: (1) for each coordinate $c_{i}$ on the path from the root of one dimension to $v$, locate a coordinate $c_{j}$ on the path at another dimension; (2) construct weight vectors representing the level and the number of resources of $c_{i}$ and $c_{j}$, respectively; (3) calculate the Mahalanobis distance between the two weight vectors as an indicator of how much the cost of calculating the intersection between the two coordinates can be reduced by building intersection links between them.
(4) Calculate the probability of adding intersection links between the two coordinates from the distance via a logistic function, so that two coordinates with greater distance have a higher probability of receiving an intersection link. (5) Randomly sample a real number according to this probability to determine whether intersection links should be added between $c_{i}$ and $c_{j}$. (6) Add an intersection index node $p_{ij}$ and link it to $c_{i}$ and $c_{j}$ by intersection links. The advantage of using the Mahalanobis distance is that the correlation matrix in the distance function makes the two vectors orthogonal and normalized with respect to the distribution of the level and resource count represented in the weight vectors. The probability of adding intersection links between two coordinates at two dimensions keeps the number of links under control. The detailed approach to adding intersection links is introduced in section (1) of Appendix 2.

# E. Strategy of Adding Short-Cut Links

Short-cut links are added between coordinates with partial order relations at a dimension so that a range can be located quickly by jumping between coordinates of the dimension. A subspace query following partial order relations can be completed in logarithmic time when the coordinate tree has logarithmic depth with respect to the number of points. Figure 2 shows an example of short-cut links (in red) appended between year coordinates, season coordinates and month coordinates of the date dimension. The detailed approach to adding short-cut links is presented in section (2) of Appendix 2.

# F. Strategy for Splitting Index Nodes

When the resources under an indexing node at one dimension accumulate, a set of coordinates at another dimension is used to split the node into a set of sub-indexing nodes.
Although splitting can increase the depth of the index, it helps distribute resources more evenly among indexing nodes, reduces the cost of scanning resources within an indexing node, and reduces the cost of calculating intersections. The basic strategy is to split a child node that holds more resources than its sibling nodes. A probabilistic approach is used to decide whether to split a node without requiring a global imbalance indicator. The detailed approach is introduced in section (3) of Appendix 2.

# G. Index Generation Process

A graph index is generated from the given resource space with resources by the following steps: (1) Create an XML file X to record the dimensions of the resource space and the points of resources in the space. (2) For each resource in the resource space and X, store it in the key-value database with its point as the key. (3) Call the algorithm presented in Figure 7 in Appendix 2 with the access API of the key-value database. (4) Output the graph index G in XML, which can be loaded into memory for query use. [Figure 3: subspace query paths on the graph index over the topic and date dimensions for the ranges $S=[none, topic/CS/DB]$ and $S=[date/2021/01, date/2021/02]$, with inclusion links on query paths in black and short-cut links in red.] The above process is also used as the maintenance process when new resources are added to the resource space.

# H. Query Processing

A point query $q = \langle c_{1}, \ldots, c_{n} \rangle$ can be processed immediately by locating the node with ID $\langle c_{1}, \ldots, c_{n} \rangle$ on the index, starting from the root of any dimension $X_{i}$ and following inclusion links. If there is no such node, the result is empty.
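A minimal sketch of the point-query lookup just described, reusing a coordinate-to-points map as a stand-in for the inclusion links of the basic index (index layout and names are illustrative, not the paper's implementation):

```python
# Hypothetical basic index: coordinate -> non-empty points reachable below it.
index = {
    "topic/CS": {("topic/CS/DB", "date/2021/01")},
    "topic/CS/DB": {("topic/CS/DB", "date/2021/01")},
    "date/2021": {("topic/CS/DB", "date/2021/01")},
    "date/2021/01": {("topic/CS/DB", "date/2021/01")},
}

def point_query(index, q, start_dim=0):
    """Locate the node with ID q = (c_1, ..., c_n); empty result if absent.
    Any dimension can serve as the starting root, as in Section III-H."""
    reachable = index.get(q[start_dim], set())  # points below coordinate c_i
    return {p for p in reachable if p == q}     # exact match on the full ID
```

Starting from either dimension yields the same result, mirroring the claim that the lookup can begin at the root of any dimension.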
A subspace query on a subspace $RS(S)$ with $S = S_{1} = [l_{1}, u_{1}], \ldots, S_{n} = [l_{n}, u_{n}]$ at the dimensions can be processed in a greedy way: (1) starting from the root index node of the tree at each dimension $X_{i}$, traverse the tree to locate each coordinate index node within the range $S_{i}$; (2) check each coordinate index node $c_{i}$ within the range to locate the intersection links within the range; (3) for each intersection link $l$ within $RS(S)$ from $c_{i}$, follow $l$ to find the non-visited intersection index nodes within $RS(S)$; (4) for each non-visited intersection index node, record the points within $RS(S)$ in the result and mark the intersection index node as visited; (5) for each point recorded in the result set, copy its resources to its parent coordinates by following the inclusion links of each dimension. Figure 3 illustrates a subspace query on two ranges within the topic dimension and the date dimension. The coordinates within each range are circled, and the query paths are in bold black. Query processing starts from the root index nodes of the two dimensions. The intersection node $\langle topic/CS/DB, date/2021 \rangle$ can be reached from either tree, and the points within the range can be obtained by following links from this index node. Only two points within the range are visited via the intersection links to the index node $\langle topic/CS/DB, date/2021 \rangle$; without such intersection links, one would have to compute intersections over six points. The traversal ensures that all points with resources within $S$ are collected and aggregated along paths of coordinates.
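Steps (1)-(5) can be sketched in miniature as follows (a hypothetical simplification: the per-dimension range location on the coordinate trees is assumed already done, so each range arrives as a set of in-range coordinates, and the basic index is a coordinate-to-points map; none of the names come from the paper):

```python
# Toy setup: one non-empty point, its resources, and parent links per dimension.
pt = ("topic/CS/DB", "date/2021/01")
index = {"topic/CS": {pt}, "topic/CS/DB": {pt},
         "date/2021": {pt}, "date/2021/01": {pt}}
resources = {pt: {"paper-a"}}
parent_of = {"topic/CS/DB": "topic/CS", "topic/CS": None,
             "date/2021/01": "date/2021", "date/2021": None}

def subspace_query_agg(ranges):
    """ranges: one set of in-range coordinates per dimension (steps 1-2)."""
    # Steps (3)-(4): candidate points are those reachable from every
    # dimension's in-range coordinates; intersect across dimensions.
    per_dim = [set().union(*(index.get(c, set()) for c in coords))
               for coords in ranges]
    hits = set.intersection(*per_dim)
    # Step (5): copy each hit's resources up its parent coordinate paths.
    agg = {}
    for p in hits:
        for c in p:
            node = c
            while node is not None:
                agg.setdefault(node, set()).update(resources[p])
                node = parent_of[node]
    return hits, agg
```

Each coordinate on an inclusion path ends up holding the resources aggregated from its descendant points, which is the invariant the traversal maintains.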
Resources are aggregated along the inclusion paths of the traversal so that each point holds the resources from its descendant points. The algorithm for subspace querying with aggregation is presented in Figure 9 in Appendix 2. Theorem 3. The traversal process starting from one dimension along inclusion links and ancestor links on $G$ completely reaches the points within subspace $S$. Proof. This can be derived from the construction steps. The index $G$ constructed by Algorithm 1 can be viewed as a set of trees connected by non-empty points with resources. Thus, a traversal covers only those non-empty points within the querying subspace $S$. The starting dimension will have all points within its range and is selected according to the ranges of the other dimensions. Following the links from points, all coordinates within all dimensions can be accessed for aggregation. □ A query on the graph index can be viewed as a search process that tests each coordinate of each resource one by one to check whether it lies within a range $S_{i}$. The speed-up obtained from the basic index is that it helps skip empty points; intersection links further reduce intersection calculation.

# IV. EXPERIMENTAL EVALUATION

An experiment is conducted to verify the subspace query performance on the index. The ACM CCS category tree, used as the topic dimension, is wide but shallow; the distribution of depths is shown in Figure 4 (coordinate tree distribution). A two-dimensional $RS$ instance with a date dimension and a topic dimension is built, and a set of randomly generated resources is added to the $RS$ instance to evaluate query performance. TF-IDF and inverted indexes are compared. As shown in Figure 5(a) (match count of different approaches for querying random points), the subspace query with the non-empty point index improves query processing efficiency compared with the TF-IDF index, as it reduces the comparisons required for a subspace query.
To demonstrate the efficiency of the graph index structure, a set of subspace queries is run on the index with intersection links and short-cut links to show how those links improve query efficiency. The results are shown in Figure 5(b) ((a) comparison with TF-IDF and key path; (b) comparison with different links). In many cases, the intersection links and short-cut links reduce the number of comparisons.

# V. RELATED WORK

# A. Relational Data Model

Classical relational databases efficiently implement queries on attributes based on functional dependencies [7]. Traditional multidimensional data models regard a given set of attributes of data items as dimensions and the corresponding attribute values as coordinates on those dimensions. Data warehouse online analytical processing (OLAP) implements hierarchical aggregation queries on multi-dimensional continuous numerical data [8]. The major difference between OLAP and relational database queries is that, when attributes are treated as dimensions, the coordinates of a dimension are organized in a hierarchical structure such that the value at a coordinate can include the values of its sub-coordinates; this hierarchical structure cannot be represented by the classical relational data model. The data cube was proposed to represent aggregation queries on multidimensional quantitative data [9]. The classic data cube model in OLAP provides roll-up and drill-down operators to aggregate data values from lower-level coordinates to higher-level coordinates. Classic OLAP queries are defined on data cubes that contain only quantitative measurements, with the partial order relation defined between attributes. The text cube generalized the data cube by allowing text sets to be stored in a multidimensional data cube where a partial order relation is defined between attributes [10]. These works do not support aggregation at dimensions with irregular coordinate structure.
In general, most multidimensional data models focus on managing and exploring values in a multidimensional space; only quantitative measurements can be kept and aggregated in multidimensional data queries. Implementing range queries and aggregation operations in a multidimensional data space remains a challenge.

# B. Resource Space Model

The Resource Space Model manages discrete resources in a multidimensional space where each dimension is a fine partition of another, so that the space can be partitioned into exponentially smaller spaces as the number of dimensions increases [2]. Queries with hierarchical coordinates at different dimensions are an important part of implementing the Resource Space Model. A multidimensional resource space consists of dimensions with coordinates, and each resource can be located through its coordinates at the dimensions of the space. A partial order relation can be defined on the coordinates of each dimension so that a tree of coordinates is formed, with each path on the tree representing the inclusion relations between the nodes on the path. A Resource Space is a space of discrete resources in which each dimension has coordinates for locating resources [2].

# C. Multidimensional Index

Constructing multidimensional indexes is an effective way to support range queries and neighborhood queries in low-dimensional data spaces [11]. The R-tree extends the idea of the B+ tree to $d$-dimensional space, using minimum bounding boxes to store multidimensional data points and constructing a tree index from the containment relations between minimum bounding boxes to improve the efficiency of multidimensional interval queries [12]. An R-tree indexing approach based on non-overlapping intervals can enhance the efficiency of data insertion and querying [13]. The K-D tree uses data points to construct hyperplanes and divides the space by hyperplanes to form a balanced binary tree index.
Due to the uneven distribution of point positions, the constructed hyperplanes can be uneven, which reduces the efficiency of range queries. To address the imbalance of the K-D tree, the quadtree divides each dimension into two intervals so that two dimensions form a hyperplane with four equal-sized sub-hyperplanes; each hyperplane containing more than one node can be divided further, so subspaces with more data are split into more subspaces, forming a more balanced tree index [14]. The classical range query operator is defined in a multidimensional data space to obtain the data items within a range, supported by indexes such as the R-tree and K-D tree [5], which are constructed on a spatial distance metric that partitions data sets into hierarchical groups at indexing nodes. Range queries in distributed systems can aggregate resources from different peers with a certain topology and data location schema [15], but they lack dimension specifications with hierarchical coordinates at dimensions. The LSM index has been widely studied and used in key-value databases to support key-range queries by storing key ranges in a logged memory that is merged into a tree structure when necessary [16]. How to implement discrete resource aggregation along hierarchical coordinates has not been addressed. Multidimensional space indexes constructed on distance metrics can avoid traversing all data and thus achieve high efficiency for neighborhood queries in linear high-dimensional spaces [17], but they do not apply to a non-linear multidimensional space consisting of discrete data and discrete attributes. Most indexes on quantitative data in a multidimensional space with linear coordinates at dimensions cannot handle the aggregation of discrete resources within a range defined on a partial order relation between coordinates.
# VI. CONCLUSION

Organizing resources in a multidimensional classification space is an approach to efficiently managing and querying large-scale resources. This paper defines an aggregation query on a subspace specified by a range on the partial order of the coordinate tree at each dimension, where each point contains resources aggregated along the paths of partial order relations between points, so that the aggregated resources at each point within the subspace can be measured, ranked and selected. To efficiently locate the non-empty points in a large subspace, an approach to generating a graph index is proposed that builds inclusion links from the partial order relations on the coordinates of dimensions, enabling a subspace query to reach non-empty points by following indexing links and to aggregate resources along indexing paths back to their super points. Generating such an index is costly, as the number of children of an index node can be very large, making the total number of indexing nodes unbounded. The proposed approach adopts the following strategies to reduce the cost: (1) adding intersection links between two indexing nodes, which reduces query processing costs while controlling the number of nodes in the graph index; (2) adding intersection links between two nodes according to a probability distribution calculated to estimate the cost benefit of the intersection; (3) splitting coordinates at one dimension that hold more resources by coordinates at another dimension, to balance the number of resources held by indexing nodes; and (4) adding short-cut links between sibling coordinates of coordinate trees to enable efficient queries on linearly ordered coordinates. Analysis and experiments verify the effectiveness of the generated index in supporting subspace aggregation queries. This work contributes to the development of data models based on multi-dimensional classification.
Koopman operator theory provides a framework for nonlinear dynamical system analysis and time-series forecasting by mapping dynamics to a space of real-valued measurement functions, enabling a linear operator representation. Despite the advantage of linearity, the operator is generally infinite-dimensional. Therefore, the objective is to learn measurement functions that yield a tractable finite-dimensional Koopman operator approximation. In this work, we establish a connection between Koopman operator approximation and linear Recurrent Neural Networks (RNNs), which have recently demonstrated remarkable success in sequence modeling. We show that by considering an extended state consisting of lagged observations, we can establish an equivalence between a structured Koopman operator and linear RNN updates. Building on this connection, we present SKOLR, which integrates a learnable spectral decomposition of the input signal with a multilayer perceptron (MLP) as the measurement functions and implements a structured Koopman operator via a highly parallel linear RNN stack. Numerical experiments on various forecasting benchmarks and dynamical systems show that this streamlined, Koopman-theory-based design delivers exceptional performance. Our code is available at: https://github.com/networkslab/SKOLR.

# 1. Introduction

Time-series prediction and analysis of nonlinear dynamical systems remain fundamental challenges across various domains. Koopman operator theory (Koopman, 1931) offers a promising mathematical framework that transforms nonlinear dynamics into linear operations in the space of measurement functions. Practical implementation of this theory faces significant challenges due to the infinite dimensionality of the resulting linear operator, necessitating finite-dimensional approximations.
The dynamic mode decomposition (DMD) and its variants are the most widely employed approximations (Rowley et al., 2009; Schmid, 2010; Williams et al., 2015), although alternative techniques have emerged (Bevanda et al., 2021; Khosravi, 2023), including ones employing learnable neural measurement functions (Li et al., 2017). In parallel developments, linear Recurrent Neural Networks (linear RNNs) have emerged as a powerful architecture in deep learning and sequence modeling (Stolzenburg et al., 2018; Gu & Dao, 2023; Wang et al., 2024b). These models leverage the computational efficiency of linear recurrence while maintaining impressive modeling capabilities. In this work, we consider the task of time-series forecasting and establish both an explicit connection and a direct architectural match between Koopman operator approximation and linear RNNs. In particular, we show that by representing the dynamic state using a collection of time-delayed observations, we can establish an equivalence between the application of an extended DMD-style approximation of the Koopman operator and the state update of a linear RNN. Building on this connection, we introduce Structured Koopman Operator Linear RNN (SKOLR) for time-series forecasting. SKOLR implements a structured Koopman operator through a highly parallel linear RNN stack. Through a learnable spectral decomposition of the input signal, the RNN chains jointly attend to different dynamical patterns from different representation subspaces, creating a theoretically grounded yet computationally efficient design that naturally aligns with Koopman principles. Through extensive experiments on various forecasting benchmarks and dynamical systems, we demonstrate that this streamlined, Koopman-theory-based design delivers exceptional performance while maintaining the simplicity of the linear RNN and its outstanding parameter efficiency.

# 2.
Preliminary This section provides foundational background for the proposed forecasting methodology. We define the discrete-time dynamical systems used to model observed time series and then introduce the Koopman operator. Definition 2.1 (Discrete-time Dynamical Systems). We consider the (autonomous) discrete-time dynamical system: $$ \mathbf { x } _ { k + 1 } = \mathrm { F } ( \mathbf { x } _ { k } ) $$ where $\mathbf { x } _ { k } \in \mathcal { M }$ denotes the system state at time $k \in \mathbb { Z } ^ { + }$ ; and $\operatorname { F } : { \mathcal { M } } \to { \mathcal { M } }$ represents the underlying dynamics mapping the state forward in time. We assume a Euclidean state space $\mathcal { M } \subset \mathbb { R } ^ { C }$ , although it can be more generally defined on an $n$ -dimensional manifold (Bevanda et al., 2021). The Koopman operator framework enables globally linear representations of nonlinear systems by applying a linear operator to measurement functions (observables) $g$ of the state ${ \bf x } _ { k }$ . The following theorem formalizes key properties of the Koopman operator. Theorem 2.2 (Koopman Operator Theorem (Koopman, 1931; Brunton et al., 2022)). Considering real-valued measurement functions (a.k.a. observables) $g : \mathcal { M } \mathbb { R } ,$ , the Koopman operator $K : { \mathcal { F } } { \mathcal { F } }$ is an infinite-dimensional linear operator on the space of all possible measurement functions $\mathcal { F }$ , which is an infinite-dimensional Hilbert space, satisfying: $$ \mathcal { K } \circ g = g \circ \mathrm { F } , $$ where $\circ$ is the composition operator. In other words, $$ \begin{array} { r } { \mathcal { K } ( g ( \mathbf { x } _ { k } ) ) = g ( \mathrm { F } ( \mathbf { x } _ { k } ) ) = g ( \mathbf { x } _ { k + 1 } ) . 
} \end{array} $$
This is true for any measurement function $g$ and for any state $\mathbf{x}_k$. While this facilitates analysis via linear maps, the Koopman operator is generally infinite-dimensional, acting on a Hilbert space of functions. For practical learning and inference, we seek effective finite-dimensional approximations. In this paper, we construct these approximations efficiently by leveraging a connection to linear recurrent neural networks (RNNs). For clarity, we now define a linear RNN.

Definition 2.3 (Linear Recurrent Neural Network (Stolzenburg et al., 2018)). Consider a hidden state space $\mathcal{H} \subseteq \mathbb{R}^{d_h}$ and input space $\mathcal{V} \subseteq \mathbb{R}^{d_v}$. For any sequence $(\mathbf{v}_k)_{k=1}^{L} \in \mathcal{V}$, the linear RNN defines a discrete-time dynamical system through the hidden state transition equation:
$$ \mathbf{h}_k = \mathbf{W}\mathbf{h}_{k-1} + \mathbf{U}\mathbf{v}_k + \mathbf{b} $$
where $\mathbf{W} \in \mathbb{R}^{d_h \times d_h}$ is the hidden state transition matrix, $\mathbf{U} \in \mathbb{R}^{d_h \times d_v}$ is the weight matrix applied to the input, and $\mathbf{b} \in \mathbb{R}^{d_h}$ is the bias vector. The evolution of hidden states $\mathbf{h}_k \in \mathcal{H}$ is uniquely determined by this linear map.

In order to prepare the connection to our proposed Koopman operator learning strategy and forecasting method, let us introduce $\mathbf{v}_k := \psi(\mathbf{y}_k)$ and define $g(\mathbf{y}_k) := \mathbf{U}\psi(\mathbf{y}_k) + \mathbf{b}$ for a suitable function $\psi$. Then, if $g(\mathbf{y}_{k-s}) = 0$ for $s > L$, we can unroll Eq.
4 to the following form:
$$ \mathbf{h}_k = g(\mathbf{y}_k) + \sum_{s=1}^{L} \mathbf{W}^s g(\mathbf{y}_{k-s}). $$
Here $\mathbf{W}^s$ indicates $s$ applications of $\mathbf{W}$.

# 3. Methodology

# 3.1. Problem Statement

Let us denote $L$ steps of the trajectory of a discrete-time dynamical system as $\mathbf{x}_1, \ldots, \mathbf{x}_L$. We focus on the setting where $\mathbf{x}_k \in \mathcal{X} \subseteq \mathbb{R}^C$. We do not directly observe $\mathbf{x}_k$, but instead observe $\mathbf{y}_k = h(\mathbf{x}_k)$ for some unknown function $h$. We have available a set of training data consisting of multiple sequences of length $L + T$. The inference task is to forecast the values $\mathbf{y}_{L+1}, \dots, \mathbf{y}_{L+T}$ given observations of the first $L$ values of a sequence $\mathbf{y}_1, \ldots, \mathbf{y}_L$.

# 3.2. Strategy: Directly observable systems

We first consider the setting where we directly observe the state, i.e., $h$ is the identity and $\mathbf{y}_k = h(\mathbf{x}_k) = \mathbf{x}_k$. Assume that the dynamical system can be captured by $\mathbf{x}_{k+1} = \mathrm{F}(\mathbf{x}_k)$. Then an appropriate forecasting approach is to learn a finite-dimensional approximation to the Koopman operator associated with $\mathrm{F}$, and then propagate it forward in time to construct the forecast. In this section, for notational simplicity, we describe a setting where learning is based on a single observed sequence $\mathbf{y}_0, \ldots, \mathbf{y}_L$ (which, for now, equals $\mathbf{x}_0, \dots, \mathbf{x}_L$). The extension to learning using multiple observed series is straightforward.
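Before introducing the measurement functions, the equivalence between the recurrent update (Eq. 4) and its unrolled form (Eq. 5) can be checked numerically. In the sketch below, the dimensions, random weights, and the identity choice of $\psi$ are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Sanity check: the linear RNN recurrence (Eq. 4) equals its unrolled form (Eq. 5).
rng = np.random.default_rng(0)
d_h, d_v, L = 4, 3, 6
W = rng.normal(scale=0.3, size=(d_h, d_h))   # hidden-state transition matrix
U = rng.normal(size=(d_h, d_v))              # input weight matrix
b = rng.normal(size=d_h)                     # bias vector
v = rng.normal(size=(L + 1, d_v))            # inputs v_0, ..., v_L (psi = identity)

def g(vk):
    # g(y_k) := U psi(y_k) + b, with psi taken as the identity here
    return U @ vk + b

# Recurrent evaluation: h_k = W h_{k-1} + U v_k + b, starting from h = 0
h = np.zeros(d_h)
for k in range(L + 1):
    h = W @ h + g(v[k])

# Unrolled form (Eq. 5): h_L = g(v_L) + sum_{s=1}^{L} W^s g(v_{L-s})
h_unrolled = g(v[L]) + sum(
    np.linalg.matrix_power(W, s) @ g(v[L - s]) for s in range(1, L + 1)
)
assert np.allclose(h, h_unrolled)
```

The terms with $s > L$ vanish because the initial hidden state is zero, matching the assumption $g(\mathbf{y}_{k-s}) = 0$ for $s > L$.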
Let us introduce measurement functions $g_1, g_2, \dotsc, g_{n_g} \in \mathcal{H}$, where $\mathcal{H}$ is a Hilbert space containing real-valued functions defined on $\mathcal{X}$. Denoting $\mathcal{L}(\mathcal{H})$ as the space of bounded linear operators $T : \mathcal{H} \to \mathcal{H}$, Khosravi (2023) formulates the learning of the Koopman operator as a minimization task with a Tikhonov-regularized empirical loss:
$$ \operatorname*{min}_{\mathbf{K} \in \mathcal{L}(\mathcal{H})} \sum_{k=1}^{L} \sum_{l=1}^{n_g} \Big( g_l(\mathbf{x}_k) - (\mathbf{K} g_l)(\mathbf{x}_{k-1}) \Big)^2 + \lambda ||\mathbf{K}||^2. $$
Although this optimization is over an infinite-dimensional space of linear operators, Khosravi (2023) demonstrates that if the measurement functions satisfy certain conditions, then there is a unique solution $\hat{\mathrm{K}}$, which can be derived by solving a finite-dimensional optimization problem. A special case occurs when $\hat{\mathrm{K}}$ is invariant on the subspace spanned by the measurement functions, $\mathcal{G} = \operatorname{span}\{g_1, \dotsc, g_{n_g}\}$, i.e., $\hat{\mathrm{K}} \in \mathcal{L}(\mathcal{G})$. In this setting, we can follow the approach of the Extended Dynamic Mode Decomposition (EDMD) method (Li et al., 2017), which approximates the Koopman operator by a finite-dimensional linear map $\mathrm{U} : \mathcal{G} \to \mathcal{G}$. We can represent the action of $\mathrm{U}$ on $g_l$ using a matrix $\mathbf{M} \in \mathbb{R}^{n_g \times n_g}$.
Because the dimension of $\mathcal{G}$ is finite, we can identify an $\mathbf{M}$ such that $\mathrm{U} g_l = \sum_{j=1}^{n_g} [\mathbf{M}]_{j,l} g_j$. The matrix $\mathbf{M}$ can then be estimated via the following minimization:
$$ \operatorname*{min}_{\mathbf{M}} || \mathbf{P}_G \mathbf{M} - \mathbf{Y} ||_F^2, $$
where $\mathbf{P}_G = [g_l(\mathbf{x}_{k-1})]_{k=1,l=1}^{L,n_g}$ and $\mathbf{Y} = [g_l(\mathbf{x}_k)]_{k=1,l=1}^{L,n_g}$. When applying EDMD, we assume that $\mathbf{P}_G$ is full rank so that a unique minimizer can be identified via the Moore-Penrose pseudoinverse. This learning task can be made more flexible by allowing for learning of the measurement functions $g_l$. Li et al. (2017) propose a method that incorporates neural-network-based learning of the measurement functions and an $L_1$ regularizer to promote sparsity.

# 3.3. Unobserved states

In many settings, we do not observe the state of the system $\mathbf{x}_k$ directly. Instead we observe $\mathbf{y}_k = h(\mathbf{x}_k)$ for some unknown observation function $h(\cdot)$. In this setting, $\mathbf{y}_k$ may not provide sufficient information in isolation to recover the state $\mathbf{x}_k$. We can instead construct a state representation by considering the past $L$ measurements, i.e., we define $\widetilde{\mathbf{x}}_k = [\mathbf{y}_{k-L+1}, \ldots, \mathbf{y}_k]^{\top}$.
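Both the pseudoinverse solution of the Frobenius least-squares problem above and the delay-embedded state $\widetilde{\mathbf{x}}_k$ can be illustrated on a toy example. In the sketch below, the system matrix, the lag length of 2, and the use of coordinate observables are assumptions made for illustration; only the first coordinate of a 2D linear system is observed, so the embedded dynamics happen to be exactly linear and the held-out one-step forecast is exact:

```python
import numpy as np

# EDMD on a delay-embedded scalar observation y_k = h(x_k) = x_k[0]
# of a toy 2D linear system (illustrative setup, not from the paper).
A = np.array([[0.9, 0.1], [-0.2, 0.8]])       # assumed stable dynamics
steps = 40
x = np.empty((steps, 2))
x[0] = [1.0, -0.5]
for k in range(steps - 1):
    x[k + 1] = A @ x[k]
y = x[:, 0]                                    # observed scalar series

# Delay embedding: rows are x~ = [y_{k-1}, y_k]
emb = np.stack([y[:-1], y[1:]], axis=1)
P_G, Y = emb[:-2], emb[1:-1]                   # training snapshot pairs
M = np.linalg.pinv(P_G) @ Y                    # minimizes ||P_G M - Y||_F^2

# One-step forecast of the held-out final observation
y_pred = (emb[-2] @ M)[-1]
assert abs(y_pred - y[-1]) < 1e-6
```

Here the scalar series satisfies a second-order linear recurrence (via the characteristic polynomial of the assumed dynamics), which is why a lag-2 embedding suffices; in general the lag length is a modeling choice.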
With this representation, we can model the dynamics as $\widetilde{\mathbf{x}}_k = \widetilde{\mathrm{F}}(\widetilde{\mathbf{x}}_{k-1})$ and target learning of the Koopman operator $\widetilde{\mathrm{K}}$ associated with $\widetilde{\mathrm{F}}$. Note that this permits us to perform prediction, because we are interested in predicting $\mathbf{y}_{k+1}$, which can be recovered from $\widetilde{\mathbf{x}}_{k+1}$. Under the same assumptions of invariance of the Koopman operator with respect to $\mathcal{G}$, we can adopt the same approach as outlined above, learning a matrix $\mathbf{M}$. Given the structure of the constructed $\widetilde{\mathbf{x}}_k$, we are motivated to impose further structure on the Koopman operator matrix, with the goal of introducing an inductive bias that can facilitate learning and make it more robust. In particular, we enforce a structure on $\mathbf{M}$ such that
$$ g\big(\widetilde{\mathbf{x}}_{k+1}\big) = \mathbf{M} g\big(\widetilde{\mathbf{x}}_k\big) = g\big(\mathbf{y}_k\big) + \sum_{s=1}^{L} \mathbf{W}^s g\big(\mathbf{y}_{k-s}\big). $$
With this structure, we see that $\mathbf{M}$ is a blockwise diagonal matrix, where each block is a power of a learnable matrix $\mathbf{W}$. Moreover, by comparing Eq. 8 with Eq.
5, we see that this structure, which represents the dynamic state using a collection of time-delayed observations (Arbabi & Mezic, 2017), can be implemented as a linear RNN.

# 3.4. SKOLR

Building on our analysis of Koopman operator approximation and the connection to the linear RNN, we present SKOLR, which integrates a learnable spectral decomposition of the input signal with a multilayer perceptron (MLP) for the measurement functions. Inspired by multiresolution DMD (Kutz et al., 2016), instead of learning a single linear RNN acting on a high-dimensional space, we propose to split the space into multiple subspaces, resulting in learning a structured Koopman operator via a highly parallel linear RNN stack. This structure also improves the parameter efficiency, as shown in Fig. 1.

Encoder. Let the input sequence be $\mathbf{Y} = [\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_L]$ where $\mathbf{y}_k \in \mathbb{R}^P$, with $P$ being the dimension of the observation. The encoder performs a learnable frequency decomposition, reconstructing a soft-gated frequency spectrum via the Fast Fourier Transform (FFT) and Inverse FFT (IFFT):
$$ \begin{array}{rl} & \mathbf{S} = \mathrm{FFT}(\mathbf{Y}), \\ & \mathbf{S}_n = \mathbf{S} \cdot \mathrm{Sigmoid}(\mathbf{w}_n), \\ & \mathbf{Y}_n = \mathrm{IFFT}(\mathbf{S}_n). \end{array} $$
The reconstructed signals $\{\mathbf{Y}_n\}_{n=1}^N$ form $N$ parallel branches, with each branch representing a frequency-based subspace, while $\{\mathbf{w}_n\}_{n=1}^N$ contain learnable parameters for frequency selection.
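A minimal sketch of this frequency-gated branch decomposition follows. The test signal, the gate initialization, and the use of the real-valued FFT are illustrative assumptions; in the actual model the gates $\mathbf{w}_n$ are trained:

```python
import numpy as np

# Sketch of the encoder's learnable frequency decomposition:
# each branch n soft-gates the spectrum with sigmoid(w_n), then inverts the FFT.
def sigmoid(w):
    return 1.0 / (1.0 + np.exp(-w))

rng = np.random.default_rng(0)
L, N = 64, 4                                   # window length, number of branches
Y = np.sin(2 * np.pi * 3 * np.arange(L) / L) + 0.1 * rng.normal(size=L)

S = np.fft.rfft(Y)                             # spectrum (real FFT for a real signal)
w = rng.normal(size=(N, S.shape[0]))           # per-branch learnable gates (here random)
branches = [np.fft.irfft(S * sigmoid(w_n), n=L) for w_n in w]

# Consistency check: with all gates saturated open (w -> +inf, sigmoid -> 1),
# a branch reproduces the input signal.
open_branch = np.fft.irfft(S * sigmoid(np.full(S.shape[0], 50.0)), n=L)
assert np.allclose(open_branch, Y, atol=1e-8)
```

Because the gates act multiplicatively on the spectrum, the branches form a soft, learnable partition of frequency content rather than a hard band-pass split.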
For each branch $n$ we parameterize the measurement functions using a non-linear feed-forward network:
$$ \mathbf{z}_{k,n} = \mathrm{FFN}_{\mathrm{enc},n}(\mathbf{y}_k), \quad \mathrm{for}\ k = 1, \ldots, L, $$
where $\mathrm{FFN} : \mathbb{R}^P \to \mathbb{R}^D$. We use multilayer perceptrons (MLPs) for simplicity, generally with only one or two layers. Other FFNs such as SwiGLU (Shazeer, 2020) are also applicable. After this encoding is complete, we have constructed the $\mathbf{z}_k = g(\mathbf{y}_k)$ (and hence the $g(\widetilde{\mathbf{x}}_k)$) that appear in Eq. 8 for $k = 1, \dots, L$. By structuring the architecture into multiple branches and incorporating both frequency-domain filtering and time-domain encoding, we enhance flexibility in learning suitable measurement functions for diverse time-series patterns.

Figure 1: Architecture of SKOLR (Structured Koopman Operator Linear RNN). The input time series goes through an encoder with learnable frequency decomposition and an MLP that models the measurement functions. With the branch decomposition, the highly parallel linear RNN chains jointly attend to different dynamical patterns from different representation subspaces. Finally, a decoder reconstructs predictions by parameterizing the inverse measurement functions. This structured approach maintains computational efficiency while naturally aligning with Koopman principles.

RNN Stack. Given the collection for each branch $n$ and time step $k$, $\mathbf{Z}_n = [\mathbf{z}_{1,n}, \ldots, \mathbf{z}_{L,n}] \in \mathbb{R}^{D \times L}$, we take it as input to a linear RNN and introduce learnable branch-specific weight matrices $\mathbf{W}_n$ for each branch:
$$ \mathbf{h}_{k+1,n} = \mathbf{W}_n \mathbf{h}_{k,n} + \mathbf{z}_{k,n}. $$
Each branch weight matrix $\mathbf{W}_n$ defines a matrix $\mathbf{M}_n$, as discussed above, which specifies a finite-dimensional approximation to a Koopman operator for $\widetilde{\mathbf{x}}$ for the learned measurement functions on that branch. Together, the branch matrices $\mathbf{M}_n$ form a structured finite-dimensional Koopman operator approximation $\widehat{\mathrm{K}}$ with block-diagonal structure:
$$ \widehat{\mathbf{K}} = \left[ \begin{array}{cccc} \mathbf{M}_1 & 0 & \cdots & 0 \\ 0 & \mathbf{M}_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{M}_N \end{array} \right]. $$
By imposing this structure and using a stack of linear RNNs, we can learn local approximations to the evolution dynamics of different observables. For each branch $n$:
$$ \mathbf{H}_n = \Big[ \mathbf{z}_{1,n},\ \mathbf{z}_{2,n} + \mathbf{M}_n \mathbf{z}_{1,n},\ \ldots,\ \mathbf{z}_{L,n} + \sum_{s=1}^{L-1} \mathbf{M}_n^s \mathbf{z}_{L-s,n} \Big]. $$
For prediction of length $T$, we recursively apply the operator to predict the Koopman space for future steps per branch:
$$ \mathbf{H}_{[L+1:L+T],n} = [ \mathbf{M}_n \mathbf{h}_{L,n}, \dots, \mathbf{M}_n^T \mathbf{h}_{L,n} ]. $$

Decoder. For reconstruction, we use mirrored feed-forward networks to parameterize the inverse measurement functions $g^{-1}$. The decoder processes the hidden states as:
$$ \hat{\mathbf{y}}_{k,n} = \mathrm{FFN}_{\mathrm{dec},n}(\mathbf{h}_{k,n}), $$
where $\mathrm{FFN}_{\mathrm{dec}} : \mathbb{R}^D \to \mathbb{R}^P$.
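A minimal end-to-end sketch of the per-branch pipeline follows. As simplifying assumptions, the encoder and decoder FFNs are reduced to single linear maps, the per-branch frequency filtering is omitted (every branch sees the same window), and all weights are random placeholders rather than trained parameters:

```python
import numpy as np

# SKOLR-style forward pass sketch: per-branch encoding, linear RNN rollout over
# the lookback window, recursive prediction in Koopman space, and decoding.
rng = np.random.default_rng(0)
P, D, N, L, T = 2, 8, 3, 16, 4                 # obs dim, branch dim, branches, lookback, horizon
Y = rng.normal(size=(L, P))                    # one input window (branch filtering omitted)

preds = []
for n in range(N):
    W_enc = rng.normal(scale=0.3, size=(D, P)) # stand-in for FFN_enc,n
    W_dec = rng.normal(scale=0.3, size=(P, D)) # stand-in for FFN_dec,n
    W_n = rng.normal(scale=0.2, size=(D, D))   # branch transition matrix

    # Rollout over the window: h <- W_n h + z_k
    h = np.zeros(D)
    for k in range(L):
        h = W_n @ h + W_enc @ Y[k]

    # Recursive prediction: repeatedly apply the branch operator, then decode
    branch_pred = []
    for _ in range(T):
        h = W_n @ h
        branch_pred.append(W_dec @ h)
    preds.append(np.array(branch_pred))

y_hat = np.sum(preds, axis=0)                  # combine branch predictions
assert y_hat.shape == (T, P)
```

In the trained model, the final combination of branch outputs and all weight matrices are learned end-to-end under the loss below; this sketch only traces the data flow and shapes.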
The decoder combines predictions from all branches to generate the final prediction $\hat{\mathbf{y}}_{[L+1:L+T]}$. The model is trained end-to-end using the loss function:
$$ \mathcal{L} = \| \hat{\mathbf{y}}_{[L+1:L+T]} - \mathbf{y}_{[L+1:L+T]} \|_2^2. $$
The structured approach, induced by both the linear RNN and the branch decomposition, enables efficient parallel processing and reduces the parameter count. Since all architectural components are very simple (basic sigmoid frequency gating, one- or two-layer MLPs for encoding/decoding, linear RNNs), the architecture is very fast to train and has a low memory cost, as we illustrate in the experiments section.

Table 1: Prediction results on benchmark datasets, $L = 2T$ and $T \in \{48, 96, 144, 192\}$ (ILI: $T \in \{24, 36, 48, 60\}$). Best results and second-best results are highlighted in red and blue respectively.

# 4. Experiments

# 4.1. Benchmarking SKOLR

# 4.1.1. DATASETS

We evaluate SKOLR on widely-used public benchmark datasets. For long-term forecasting, we use Weather, Traffic, Electricity, ILI and four ETT datasets (ETTh1, ETTh2, ETTm1, ETTm2). We assess short-term performance on the M4 dataset (Makridakis et al., 2020), which includes six subsets of periodically recorded univariate marketing data. For more information about the datasets see Appendix A.1.

# 4.1.2. BASELINES AND EXPERIMENTAL SETTINGS

We compare against state-of-the-art deep forecasting models. The comparison includes transformer-based models: Autoformer (Wu et al., 2021), PatchTST (Nie et al., 2023), and iTransformer (Liu et al., 2024); TCN-based models: TimesNet (Wu et al., 2023) and MICN (Wang et al., 2023a); a linear model: DLinear (Zeng et al., 2023); and Koopman-based models: KNF (Wang et al., 2023b) and Koopa (Liu et al., 2023).
We select these representative baselines for their established performance and public implementations. Following Koopa (Liu et al., 2023), we set the lookback window length $L = 2T$ for prediction horizon $T \in \{48, 96, 144, 192\}$ for all datasets, except ILI, for which we use $T \in \{24, 36, 48, 60\}$. This setting leverages more historical data for longer forecasting horizons. We report baseline results from Liu et al. (2023) except for iTransformer; we reproduce iTransformer results with $L = 2T$ using the officially released code. Performance is measured using Mean Squared Error (MSE) and Mean Absolute Error (MAE). Appendix A.2 provides implementation details.

# 4.1.3. RESULTS AND ANALYSIS

Table 1 reports the experimental results for eight benchmarks. The performance is measured by MSE and MAE; the best and second-best results for each case (dataset, horizon, and metric) are highlighted in bold and underlined, respectively. The results are the average of 3 trials. We rank the algorithms in Table 1 based on their MSE and order them based on their average rank across eight datasets and four prediction horizons. Figure 2 shows the relative ranks. We observe that SKOLR achieves SOTA performance, with the best average rank across all settings.

Figure 2: Boxplot for ranks of the algorithms (based on their MSE) across seven datasets and four prediction horizons. The medians and means of the ranks are shown by the vertical lines and the black triangles respectively; whiskers extend to the minimum and maximum ranks.

Figure 3: Analysis of SKOLR's branch-wise behavior: (a) frequency decomposition and (b) prediction performance. We observe that different branches focus on different frequency components.
Table 2: Model evaluation results (MSE/MAE) on nonlinear dynamical systems (NLDS).

The model shows strength in capturing complex patterns in the Weather dataset, matching Koopa's performance while surpassing the transformer-based baselines, indicating effective handling of meteorological dynamics. For the ILI dataset, which features highly nonlinear epidemic patterns, SKOLR outperforms the baseline methods, with significant error reduction for the shorter horizons. While SKOLR demonstrates strong performance in long-term forecasting, we also evaluate its effectiveness on short-term predictions with the M4 dataset. Results in Appendix B.1 show consistent improvements over both transformer-based forecasting methods and Koopman-based alternatives across different time scales.

# 4.2. State Prediction for Non-Linear Systems

Koopman operator-based approaches have gained attention for their ability to perform system identification in a fully data-driven manner. To evaluate SKOLR's performance in this context, we conducted a series of experiments on nonlinear dynamical systems (NLDS) (details in Appendix E).

Table 2 demonstrates SKOLR's effectiveness across different dynamical systems. For periodic systems like the Pendulum, SKOLR achieves substantial improvements, indicating superior capture of oscillatory patterns. In chaotic systems like Lorenz '63, SKOLR shows better stability with a $10.9\%$ reduction in MSE, suggesting robust handling of sensitive dependence on initial conditions. The model demonstrates particularly strong performance on mixed dynamics: the Lotka-Volterra and Duffing oscillators. These results validate that SKOLR's structured operator design effectively captures both periodic motions and complex nonlinear dynamics. Fig. 3 demonstrates SKOLR's multi-scale decomposition strategy.
The FFT analysis reveals how different branches place more emphasis on some frequency bands. This natural frequency partitioning emerges from our structured Koopman design, enabling each branch to focus on specific temporal scales. The prediction visualization illustrates the complementary nature of these branches, where their combined forecasts reconstruct complex dynamics through principled superposition of simpler, frequency-specific predictions. More analysis can be found in Appendix D.3.

# 4.3. Analysis and Ablation Study

# 4.3.1. ANALYSIS: STRUCTURED KOOPMAN OPERATOR

We analyze the impact of branch configurations through two controlled experiments: (1) Fixed parameter count: total parameters remain constant $({\sim}1.6\mathrm{M})$ across configurations while varying the learnable frequency decomposition $\mathbf{w}$, with $N$ branches and lookback window $L$; (2) Fixed dimension: we maintain a constant Koopman operator approximation dimension $\mathrm{dim}(\widehat{\mathrm{K}}) = 512$ while varying the branch number $N$. As $N$ increases, each branch's dimension $D$ decreases proportionally $(D = 512/N)$, leading to a reduced parameter count. We conduct experiments on the ETTm1 dataset. As we can see in Table 3 and Fig. 4, maintaining similar parameter counts $({\sim}1.6\mathrm{M})$ and increasing the branch number from $N = 1$ to $N = 16$ improves performance for most horizons.

Figure 4: MSE comparison on the ETTm1 dataset across different branch configurations and prediction horizons. Bars show MSE values for each configuration. All configurations maintain similar parameter counts $({\sim}1.6\mathrm{M})$. Increasing the branch number improves performance.

Table 3: Performance comparison with similar parameter counts on ETTm1.

More significantly, when keeping $\mathrm{dim}(\widehat{\mathrm{K}}) = 512$ (Table 4), models with more branches maintain strong performance despite substantial parameter reduction.
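The parameter saving from the block-diagonal structure in the fixed-dimension setting can be checked with a quick count. The sketch below counts only the transition-operator entries; the full model also has encoder and decoder parameters, so these numbers illustrate the scaling rather than the paper's reported totals:

```python
# Entries in the transition operator for dim(K) = 512 split into N branches:
# one dense 512x512 block vs. N block-diagonal blocks of size D = 512/N.
def operator_params(dim=512, n_branches=1):
    d = dim // n_branches          # per-branch dimension D = dim / N
    return n_branches * d * d      # N blocks of D x D entries

assert operator_params(512, 1) == 262144    # dense operator
assert operator_params(512, 16) == 16384    # 16 branches: 16x fewer entries
```

In general, splitting a dimension-$d$ operator into $N$ equal blocks reduces its entry count by a factor of $N$, consistent with the trend in Table 4.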
Notably, the configuration with $D = 32, N = 16$ achieves comparable performance to the 1-branch model while using only $0.25\mathrm{M}$ parameters (an $85\%$ reduction). This demonstrates that structured decomposition through multiple branches enables significantly more efficient parameter utilization while maintaining or improving forecasting accuracy.

# 4.3.2. ABLATION: IMPACT OF FREQUENCY DECOMPOSITION

The improved performance with multiple branches motivates further analysis of our learnable frequency decomposition strategy. In Equation 9, the learnable parameters $\mathbf{w}$ enable adaptive frequency allocation across branches, in contrast to a uniform decomposition ($\mathbf{w} = \mathbf{1}$). Table 5 demonstrates that this learnable approach consistently outperforms uniform allocation across prediction horizons. This adaptive capability is particularly beneficial for datasets with complex temporal patterns (Weather, ECL), where different frequency bands may carry varying importance at different time scales. The learned masks show distinct patterns across datasets, suggesting that the model successfully adapts its frequency decomposition strategy based on the underlying data characteristics. Notably, on the ETTh1 dataset, learnable decomposition occasionally underperforms uniform masking, particularly at shorter horizons $(T = 48)$. This suggests potential overfitting on smaller datasets.

Table 4: Performance comparison with $\mathrm{dim}(\widehat{\mathrm{K}}) = 512$ and varying branch numbers $N$ on ETTm1. Parameter counts shown for horizon $T = 192$.

Table 5: Ablation study on frequency decomposition.

# 4.4. Model Efficiency

To demonstrate the computational efficiency of SKOLR, we analyze the model complexity in terms of parameter count, GPU memory, and running time. We compare these values with several baseline models on the ETTm1 and Weather datasets with sequence length 96 and prediction length 48. Fig.
5 demonstrates SKOLR's computational advantages. On ETTm1, SKOLR achieves the best MSE while using only 3.31 MiB of GPU memory. The training speed is also notably faster than other methods. On the Weather dataset, SKOLR maintains competitive accuracy while using significantly less memory and training $4\times$ faster than the best-performing baseline. This exceptional efficiency-performance trade-off stems from our structured linear operations in Koopman space, avoiding the quadratic complexity of self-attention while maintaining modeling capacity through the parallel branch architecture. The computational efficiency for all datasets can be found in Appendix C.

Figure 5: MSE versus computation time per epoch (with GPU memory usage) for SKOLR and baseline models on the ETTm1 and Weather datasets.

# 5. Related Work

# 5.1. Koopman Operator-based Time-series Forecasting

Koopman theory (Koopman, 1931) has been applied for modeling and analyzing complex dynamical systems for decades (Mezić, 2005; Brunton et al., 2022). The major advantage of the Koopman operator is that it can represent the dynamical system in the form of a linear operator acting on measurement functions (observables). However, learning the operator is challenging because it is infinite-dimensional. Researchers strive to develop effective strategies for performing finite-dimensional approximations; key to this is the selection of good measurement functions. To address this, recent work has explored neural networks for learning the mapping and the approximate operator simultaneously (Li et al., 2017; Lusch et al., 2018; Takeishi et al., 2017; Yeung et al., 2019). Three recent works address time-series forecasting using Koopman operators.
K-Forecast (Lange et al., 2021) uses Koopman theory to handle the nonlinearity in temporal signals and proposes a data-dependent basis for long-term time-series forecasting. By leveraging predefined measurement functions, KNF (Wang et al., 2023b) learns the Koopman operator and attention map to cope with time-series forecasting with changing temporal distributions. Koopa (Liu et al., 2023) introduces modular Koopman predictors that separately address time-variant and time-invariant components via a hierarchical architecture, using learnable operators for the latter and eDMD (Williams et al., 2015) for the former. These prior works rely on hierarchical architectures or complex spectral decompositions to approximate Koopman operators. Our work takes a different approach, drawing a connection with linear RNNs, paving the way to a very efficient and simple forecasting architecture. Our results demonstrate that this strategy leads to improved accuracy with reduced computational overhead and memory. Although Orvieto et al. (2023) provided insights into the potential connections between the Koopman operator and a wide MLP $+$ linear RNN for representing dynamical systems, this was not the primary focus of their work, and they did not provide equations demonstrating the connection or conduct empirical verification. In this work, building on similar insights, we establish an explicit connection by deriving equations that demonstrate a direct analogy between a structured approximation of a Koopman operator and an architecture consisting of an MLP encoder combined with a linear RNN.

# 5.2. Deep Learning for Time-Series Forecasting

Time-series forecasting has evolved from statistical models (Makridakis & Hibon, 1997; Hyndman et al., 2008) to deep learning approaches.
Previous methods used RNNs (Salinas et al., 2020; Smyl, 2020; Mienye et al., 2024) and CNNs (Bai et al., 2018; Luo et al., 2024) for their ability to capture temporal dependencies. MLP-based architectures (Oreshkin et al., 2020; Challu et al., 2023; Vijay et al., 2023; Wang et al., 2024a) also demonstrated promising performance for forecasting. Recently, transformer architectures (Nie et al., 2023; Zhang et al., 2024; Hounie et al., 2024; Ilbert et al., 2024) introduced powerful attention mechanisms, with innovations in basis functions (Ni et al., 2024) and channel-wise processing (Liu et al., 2024). To address their quadratic complexity, sparse attention variants (Lin et al., 2024) were proposed, but these often struggle with capturing long-range dependencies due to information loss from pruned attention scores. Foundation models (Das et al., 2024; Darlow et al., 2024) and unified approaches (Woo et al., 2024) have recently emerged. These attempt to mitigate the limitations through pre-training and multi-task learning, but this comes at the cost of dramatically increased architectural complexity and computational overhead. To address the complexity challenges in time-series forecasting, recent state space models (Gu & Dao, 2023) achieve linear complexity, while physics-informed approaches (Verma et al., 2024) enhance interpretability. However, these methods often require complex architectures or domain expertise. Our approach offers a balanced solution with a principled foundation based on Koopman theory, achieving excellent prediction performance with very low computation and memory requirements.
# 1 Introduction Recent years have witnessed a shift from specialized models to large foundation models capable of performing a plethora of tasks, particularly in language [Touvron et al., 2023, OpenAI, 2023, Bai et al., 2023, Qwen et al., 2025, DeepSeek-AI et al., 2025]. This paradigm shift has led to the development of increasingly large models, often comprising billions of weight parameters, trained on massive compute clusters by major high-tech companies. Downstream consumers and researchers frequently seek to deploy these models by fine-tuning them on proprietary data for specific tasks or by personalizing them. However, a significant disparity often exists between the computational resources used to train these massive models and the resources available to downstream users. Low-Rank Adaptation This discrepancy has driven the widespread adoption of parameter-efficient fine-tuning (PEFT) methods [He et al., 2022, Pfeiffer et al., 2020, Ding et al., 2022, Yu et al., 2022, Han et al., 2024]. Among these techniques, Low-Rank Adaptation (LoRA) [Hu et al., 2022] has become particularly widespread, arguably due to its effectiveness and simplicity. LoRA parameterizes perturbations to pre-existing weight matrices $W$ as products of low-rank factors, $\Delta W = B A$, where both $W$ and $\Delta W$ are, say, $d_{out} \times d_{in}$ matrices and the low-rank factor matrices $B$ and $A$ are $d_{out} \times r$ and $r \times d_{in}$ respectively, with $1 \leq r \ll d_{in}, d_{out}$ (thus leading to parametric efficiency). After fine-tuning, it suffices to substitute the original matrix $W$ with its updated variant: $$ W^{\prime} = W + \Delta W \stackrel{\mathrm{def.}}{=} W + BA. 
$$ This technique achieves performance comparable to full fine-tuning, at a fraction of its computational cost, by exploiting the observation that only a small fraction of a large pre-trained model’s parameters must be adjusted to adapt to a new task. Such a formulation relies on the assumption that the distribution shift between the pre-training and post-training datasets is not large, which is empirically supported in the context of language. Fine-tuning only a small subset of parameters also helps avoid issues such as catastrophic forgetting [French, 1999], ensuring the new model retains the extensive knowledge gained during pre-training by limiting its divergence from the base model. Asymmetry in Low-Rank Adapters Among the variety of LoRA techniques that have recently been proposed in the literature (see Section 2), we focus on the lightweight paradigm introduced by Zhu et al. [2024]. There, the authors demonstrate that LoRA components are inherently asymmetric. This asymmetry is evident in the typical LoRA initialization, where, for full LoRA (tuning both $A$ and $B$), $A$ is commonly initialized to a random Gaussian matrix (from now on referred to as a random factor), while $B$ is initialized to zero [Hu et al., 2022]. This empirically derived procedure aligns with the distinct roles of the $A$ (feature extractor) and $B$ (feature projector) matrices: $A$ extracts features from the input, and $B$ uses these features to create the desired output. If the goal of adaptation is to approximate a desired output, selectively projecting away a portion of the input feature space might be less damaging than projecting away a significant portion of the output space, especially when the LoRA rank $r$ is much smaller than the output dimension $d_{out}$. This is particularly true if the input features contain redundant information or exhibit a low-rank distribution. 
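To make the bookkeeping concrete, the following minimal numpy sketch instantiates the merge rule $W' = W + BA$ and the standard initialization described above ($A$ random Gaussian, $B$ zero); the dimensions are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

# Minimal sketch of the LoRA update W' = W + B @ A, with the standard
# initialization: A random Gaussian (the "random factor"), B zero.
d_out, d_in, r = 64, 48, 4  # illustrative dimensions
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))           # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) / np.sqrt(r)  # random factor (feature extractor)
B = np.zeros((d_out, r))                     # zero init (feature projector)

delta_W = B @ A          # rank <= r perturbation; zero at initialization
W_merged = W + delta_W   # after fine-tuning, substitute W with W + B A

# Parametric efficiency: LoRA trains r*(d_in + d_out) numbers instead of d_out*d_in.
lora_params = r * (d_in + d_out)
full_params = d_out * d_in
```

Because $B$ starts at zero, the merged matrix equals the pre-trained $W$ at initialization, and any subsequent update $BA$ has rank at most $r$.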
We further study this asymmetric LoRA paradigm since it is both simple and efficient (potentially leading to a $2\mathrm{x}$ trainable-parameter reduction when $d_{in} = d_{out}$); see Figure 1. Figure 1: Asymmetric low-rank adapter schematic for a simple linear layer and a non-linearity $\sigma$. Note that in practice, in Large Language Models (LLMs), we fine-tune the attention matrices instead. Open Theoretical Questions In this work, we focus on providing a full theoretical characterization of the asymmetric LoRA case with frozen random factors. While Zhu et al. [2024] discuss valuable intuition and introduce initial theoretical groundwork for the asymmetry phenomenon, several important questions with practical implications for LoRA users remain open. In particular, the original paper derives only an upper bound on the average generalization gap, and its conclusions only hold over several, perhaps thousands, of experiments and instantiations of random LoRA factors: they are derived for the average generalization gap, calculated over random draws of the data and the random initialization of the $A$ matrix. Hence, the first question we aim to answer is: (Q1) How rapidly does the typical LoRA generalization gap concentrate around its mean? Simply put, a positive answer would imply that the generalization gap trends hold whenever a practitioner runs a single fine-tuning experiment with a specific model and a single randomly chosen factor. Therefore, understanding the concentration of the generalization performance for a specific model around the reported average is crucial for assessing the robustness and predictability of the asymmetric LoRA method in real-world scenarios. A natural progression of inquiry leads to: (Q2) What are the fundamental limits on the sample efficiency of asymmetric LoRA? 
After establishing a better understanding of the generalization gap (as in Q1), the next critical question is how much data the model needs to achieve that generalization: even strong generalization abilities are less useful if they require large amounts of training data. Thus, Q2 seeks to determine the fundamental limits on how efficiently asymmetric LoRA can learn in practice. Contributions We directly address the two critical open questions in the learning theory of LoRA: • Regarding Q1, our main upper bound on the sample complexity of the asymmetric LoRA paradigm of Zhu et al. [2024], derived in Theorem 1, shows that rank $r$ LoRAs achieve a sample complexity of $\begin{array} { r } { \tilde { \mathcal { O } } \left( \frac { \sqrt { r } } { \sqrt { N } } \right) } \end{array}$ (ignoring logarithmic terms), with high probability, from $N$ training samples. We deduce that although an increased rank $r$ may indeed increase expressiveness, as shown in Zeng and Lee [2024], it does so at the cost of a wider generalization gap. Our worst-case theoretical guarantees, which hold uniformly over all fine-tuning tasks and all training algorithms, are verified experimentally in Section 5. • We then turn our attention to Q2, by affirming in Theorem 2 that the typical sample complexity of $\begin{array} { r l } { \mathcal { O } \left( \frac { 1 } { \sqrt { N } } \right) } \end{array}$ is indeed optimal in terms of sample size. We do this by constructing a neural network with a random LoRA factor and a family of data-generating distributions achieving the upper bound in our first result; thus yielding a matching lower bound. We positively answer both Q1 and Q2. The theoretical analysis presented in this paper relies on a new combination of techniques which are non-standard in learning theory. These include techniques known in neural network approximation and random matrix theory for entirely different purposes. 
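The rate in our upper bound can be read off numerically. The helper below is purely illustrative (the constant `C` is a placeholder hiding the logarithmic factors and problem-dependent terms in Theorem 1); only the scaling in $r$ and $N$ is meaningful:

```python
import math

# Illustrative evaluation of the O~(sqrt(r)/sqrt(N)) rate from Theorem 1.
# C is a placeholder constant (an assumption, not a quantity from the paper);
# only the joint scaling in the rank r and sample size N is meaningful.

def gap_rate(r, N, C=1.0):
    """Worst-case generalization-gap rate, up to logarithmic factors."""
    return C * math.sqrt(r) / math.sqrt(N)
```

Concretely, quadrupling the rank doubles the bound, while quadrupling the number of training samples halves it, which is the trade-off between expressiveness and generalization discussed above.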
Technical Contributions Our upper bound is derived from recent tools in constructive approximation; namely, from the theory of Lipschitz widths [Petrova and Wojtaszczyk, 2023b], which, in Petrova and Wojtaszczyk [2023a], yields sharp estimates on the local Lipschitz regularity of the map sending the parameters of a neural network to the function those parameters realize in the space of continuous functions. That Lipschitz constant is then estimated, with high probability, using random matrix theory results [Gordon, 1992]. These estimates on the Lipschitz constant of the LoRA-factors-to-neural-network map, together with classical covering-number estimates for high-dimensional cubes (e.g. in Lorentz et al. [1996]) and Dudley entropy-integral estimates, allow us to obtain our upper bound on LoRA generalization. For the proof of our lower bound we utilize some known constructions in the approximation theory for neural networks, where we emulate identity blocks with (randomized) MLP layers. Upon doing so, we appeal to recent anti-concentration inequalities of Rudelson and Vershynin [2008] of Littlewood–Offord type, typical in modern random matrix theory (e.g. Tao and Vu [2009, 2010]), for random variables on $[0, 1]$, showing that our bound is tight. In either case, we integrate techniques from the (universal) approximation theory of neural networks with learning-theoretic and random-matrix-theoretic tools to derive our results. This shows that not only the results, but also the techniques, previously isolated within approximation theory can have learning-theoretic applications. # 2 Related Works A Zoo of LoRAs Following its introduction, numerous LoRA variants have emerged, often aiming to further reduce computational overhead. Quantization, for example, offers a way to lower memory consumption both during training [Gholami et al., 2021, Dettmers et al., 2023, Guo et al., 2024] and after [Yadav et al., 2023]. 
The number of trainable parameters can also be reduced through adaptive rank allocation [Zhang et al., 2023b]. Ideas around weight or projection reuse [Frankle and Carbin, 2018, Ramanujan et al., 2020] have further inspired strategies to decrease trainable LoRA parameters, such as learning diagonal rescaling of frozen random $B$ and $A$ matrices (VeRA) [Kopiczko et al., 2024], deriving $B$ and $A$ from the SVD of the pre-trained $W _ { 0 }$ and optimizing a smaller matrix in the resulting space (SVDiff) [Han et al., 2023], learning a linear combination of fixed random matrices (NOLA) [Koohpayegani et al., 2023], and fine-tuning with orthogonal matrices (BOFT) [Liu et al., 2024]. Furthermore, LoRA’s applicability has recently expanded beyond classical LLM post-training and language. For example, it has been employed in the context of vision language models [Li et al., 2023] and vision Transformers [Dong et al., 2023], image generative modeling for fast Stable Diffusion fine-tuning and personalization [Rombach et al., 2022, Gal et al., 2022, Ruiz et al., 2022, Roich et al., 2022], for score distillation [Wang et al., 2023] (although more principled LoRA-free methods have also emerged recently [Lukoianov et al., 2024]), fine-tuning base models into reasoning models using reinforcement learning [Wang et al., 2025], and even in the development of new adapters for Graph Neural Networks and Graph Transformers [Papageorgiou et al., 2025]. Previous Theoretical Works In terms of theoretical results, the approximation properties of standard LoRAs have only recently come into focus, as seen in Zeng and Lee [2024]. In comparison, the statistical properties of LoRA are better understood, with PAC-Bayesian guarantees recently derived in Lotfi et al. [2024], Liu et al. [2023], Lei et al. [2024], and guarantees in the infinite-width surrogate (NTK) limit, as detailed in Malladi et al. [2023], Jang et al. [2024]. 
Of particular interest for this paper is prior research indicating that freezing the $A$ matrix in standard (or vanilla) LoRA does not significantly affect performance [Zhang et al., 2023a]. Intriguingly, although almost all recent works initialize or freeze these two matrices asymmetrically, a rigorous investigation into the implications of this asymmetry in low-rank adaptation has only recently garnered theoretical attention. Finally, in spite of the fact that NTK-based analyses [Malladi et al., 2023, Jang et al., 2024] provide insight into the training dynamics and loss landscape of LoRA models, their conclusions are limited to the asymptotic (infinite-width) setting: they cannot necessarily be transferred to the real-world finite-width scenarios of interest in this work. # 3 Main Statistical Guarantees Setup We consider the generalization capacity of foundation models with low-rank randomized factors. We consider the (random) class $\mathcal{F}$ of functions $f : \mathbb{R}^d \to \mathbb{R}^D$ admitting the representation: $$ \begin{array}{rl} & f(x) \stackrel{\mathrm{def.}}{=} (W^{(T+1)} + \underbrace{B^{(T+1)} A^{(T+1)}}_{\Delta W^{(T+1)}:\, \mathrm{LoRA\ Perturbation}}) x^{(T+1)} + b^{(T+1)} \\ & x^{(t+1)} \stackrel{\mathrm{def.}}{=} \sigma \bullet \big( (W^{(t)} + \underbrace{B^{(t)} A^{(t)}}_{\Delta W^{(t)}:\, \mathrm{LoRA\ Perturbation}}) x^{(t)} + b^{(t)} \big) \\ & x^{(1)} \stackrel{\mathrm{def.}}{=} x \in \mathbb{R}^d \end{array} $$ where $W^{(1)}, \dots, W^{(T+1)}$ are pre-trained $d_{t+1} \times d_t$-dimensional weight matrices, $b^{(1)}, \dots, b^{(T+1)}$ are pre-trained $d_{t+1}$-dimensional biases, $B^{(1)}, \dots, B^{(T+1)}$ are $d_{t+1} \times r$-dimensional random (non-trainable) LoRA factors, and $A^{(1)}, \dots, A^{(T+1)}$ are $r \times d_t$-dimensional trainable LoRA factors; here $d_1 = d$, $d_2 = \dots = d_T = W$, $d_{T+1} = D$, and $t = 1, \dots, T$. Remark 1 (Symmetry in Upper-Bound). Our main upper-bound (Theorem 1) remains valid if the $B$ factors are trainable and the $A$ factors are instead randomized, as in Zhu et al. [2024]. We consider a 1-Lipschitz loss function $\ell : \mathbb{R}^D \times \mathbb{R}^D \to [0, 1]$ and i.i.d. training data $\{(X_n, Y_n)\}_{n=1}^N$ (independent of the random LoRA factors $\{B^{(t)}\}_{t=1}^{T+1}$) drawn from a data-generating distribution $\mathbb{P}$ on $\mathbb{R}^d \times \mathbb{R}^D$. Our objective is to compute a uniform generalization bound between the empirical risk $\mathcal{R}^N(f)$ and the true risk $\mathcal{R}(f)$ for any (random) learner $f$ in the (random) class $\mathcal{F}$; these are defined by $$ \mathcal{R}^N(f) \stackrel{\mathrm{def.}}{=} \frac{1}{N} \sum_{n=1}^N \ell(f(X_n), Y_n) \quad \text{and} \quad \mathcal{R}(f) \stackrel{\mathrm{def.}}{=} \mathbb{E}_{(X,Y) \sim \mathbb{P}}\big[\ell(f(X), Y)\big]. $$ We emphasize that, unlike classical learning theoretic results, our function class $\mathcal{F}$ is random. 
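A minimal numpy realization of the class above, with frozen random factors $B^{(t)}$, trainable factors $A^{(t)}$ (here initialized to zero), and ReLU as $\sigma$, may help fix the indexing; the concrete dimensions are illustrative choices, not values from the paper:

```python
import numpy as np

# Minimal sketch of the randomized LoRA class: frozen random factors B^(t),
# trainable factors A^(t), sigma applied componentwise (here: ReLU).
# Dimension names follow the text: d_1 = d, d_2 = ... = d_T = width, d_{T+1} = D,
# with a final D x D affine map playing the role of W^(T+1).
d, width, D, T, r = 3, 8, 2, 3, 2  # illustrative choices
rng = np.random.default_rng(0)

dims = [d] + [width] * (T - 1) + [D, D]  # d_1, ..., d_{T+1}, plus the output map
Ws = [rng.normal(size=(dims[t + 1], dims[t])) for t in range(T + 1)]  # pre-trained
bs = [rng.normal(size=dims[t + 1]) for t in range(T + 1)]
Bs = [rng.normal(size=(dims[t + 1], r)) for t in range(T + 1)]  # frozen random
As = [np.zeros((r, dims[t])) for t in range(T + 1)]             # trainable

def f(x):
    for t in range(T):  # hidden updates x^(t+1) = sigma((W + B A) x^(t) + b)
        x = np.maximum(0.0, (Ws[t] + Bs[t] @ As[t]) @ x + bs[t])
    return (Ws[T] + Bs[T] @ As[T]) @ x + bs[T]  # final affine layer

# Effective number of trainable LoRA parameters: q = r*(width*(T-1) + d + D).
q = sum(A.size for A in As)
```

Counting the entries of the trainable factors recovers exactly the quantity $q = r(W(T-1) + d + D)$ used later in the paper's bounds.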
We want to derive a high-probability bound on the worst case, uniformly over all data-generating distributions and all training algorithms mapping training data to a learner. This randomized generalization gap is $$ \mathbf{G} \stackrel{\mathrm{def.}}{=} \sup_{f \in \mathcal{F}} \big| \mathcal{R}(f) - \mathcal{R}^N(f) \big|. $$ We highlight that, unlike classical PAC-learning guarantees, the true risk is a random object, as it depends on the randomness in the LoRA factors, rather than being deterministic. Preliminaries and Notation Before stating our main results, we first rigorously define, at a high level, our randomized learners with random LoRA factors expressed in (2). Parametric Perturbations of Multi-Layer Perceptrons Fix dimensions $d, D \in \mathbb{N}_+$, depth and width parameters $T, W \in \mathbb{N}_+$, respectively, and an MLP $\hat{f} : \mathbb{R}^d \to \mathbb{R}^D$ with representation $$ \begin{array}{rl} & \hat{f}(x|\theta) \stackrel{\mathrm{def.}}{=} W^{(T+1)} x^{(T+1)} + b^{(T+1)} \\ & x^{(t+1)} \stackrel{\mathrm{def.}}{=} \sigma \bullet (W^{(t)} x^{(t)} + b^{(t)}) \qquad \text{for } t = 1, \dots, T \\ & x^{(1)} \stackrel{\mathrm{def.}}{=} x \end{array} $$ where $\theta = \big((W^{(t)}, b^{(t)})\big)_{t=1}^{T+1} \in \mathbb{R}^p$ with $p = (T-1)W(W+1) + (d+1)W + (D+1)W$, $W^{(t)} \in \mathbb{R}^{d_{t+1} \times d_t}$, and $b^{(t)} \in \mathbb{R}^{d_{t+1}}$, where $d_t = W$ if $t \in \{2, \dots, T\}$, $d_1 = d$, and $d_{T+1} = D$. We fix a parameter $\theta_{pre} \stackrel{\mathrm{def.}}{=} ((W^{(t)}, b^{(t)}))_{t=1}^{T+1} \in \mathbb{R}^p$. We define the perturbed representation map $$ \begin{array}{rl} & P_{T,W}^{\theta_{pre}} : \mathbb{R}^p \to C([0,1]^d, \mathbb{R}^D) \\ & \qquad \theta \mapsto \hat{f}(\cdot \,|\, \theta + \theta_{pre}). \end{array} $$ LoRAs with Random Factors Fix a probability space $(\Omega, \mathcal{A}, \mathbb{P})$; all random quantities will be defined thereon. Fix random matrices $B^{(1)}, \dots, B^{(T+1)}$ of dimensions $W \times r$ for $t \leq T$ and $D \times r$ for $t = T+1$. We fix a maximum admissible LoRA weight size $M \geq 0$. We then define the (random) parameter-to-LoRA map, which maps low-rank matrices $A^{(1)}, \dots, A^{(T+1)}$ and an event $\omega \in \Omega$ (the random initialization of the non-trainable LoRA parameters $B^{(t)}$) to the resulting continuous function: $$ \begin{array}{rl} & \mathrm{LoRA} : \Omega \times [-M, M]^q \to C([0,1]^d, \mathbb{R}^D) \\ & \qquad (\omega, (A^{(t)})_t) \mapsto P_{T,W}^{\theta_{pre}}\big( (B^{(t)}(\omega) A^{(t)})_{t=1}^{T+1} \big) \\ & \qquad \quad = \hat{f}(\cdot \,|\, (B^{(t)}(\omega) A^{(t)})_{t=1}^{T+1} + \theta_{pre}) \end{array} $$ where the effective number of trainable LoRA parameters is defined to be $$ q \stackrel{\mathrm{def.}}{=} r(W(T-1) + d + D) \in \mathcal{O}(r). $$ Notation We now close our preliminaries with the following table aggregating the notation used in formulating our main result within the main body of our paper. Table 1: Notation used in the main body of our manuscript. Henceforth, we always operate in the following setting: Setting 1. Fix $W, T, r, d, D \in \mathbb{N}_+$ with $1 \leq r < W$. 
We fix a continuous activation function $\sigma : \mathbb{R} \to \mathbb{R}$ which is either bounded and Lipschitz or the ReLU activation function. We assume there is a constant $M > 0$ such that $|A_{ij}^{(t)}| \leq M$ for all $t, i, j$ throughout the LoRA training. # 3.1 Main Upper-Bound on LoRA Generalization Gap Our main theorem is the following randomized generalization bound. Theorem 1 (LoRA Sample Complexity: Upper-Bound). In Setting 1: for every failure probability $0 < \delta \leq 1$, the following holds with probability at least $1 - \delta$ $$ \mathbf{G} \leq 4 \min\left\{ 1, \sqrt{q}\, \frac{6\sqrt{A}}{\sqrt{N}} \right\} + \sqrt{\frac{8 \log\big(2/(1 - \sqrt{1-\delta})\big)}{N}} $$ where $A = (cT+1)\log(2R + 2R_0)$, $R = M\nu\sqrt{2r\log(2W/\epsilon)}$, and $R_0 = \|\theta_{pre}\|_\infty$. Remark 2 (Symmetry in the Upper Bound - Randomizing $B$ vs. $A$). In Zhu et al. [2024], the authors train $B$ and randomize-then-freeze $A$ during training. Our upper bound in Theorem 1 applies equally to this setup, and the reader may draw analogous conclusions if the roles of $A$ and $B$ are reversed. This is uncovered within the details of our proof of Theorem 1. As a function of $N$, our main generalization bounds for random LoRAs are generally tight and cannot be improved. Our second supporting result confirms this by exhibiting a data-generating distribution for a classification problem where our bound is sharp over the random LoRA class, assuming the random LoRA factors are standard Gaussian matrices as in Zhu et al. [2024]. # 3.2 Matching Lower-Bound on LoRA Generalization Gap The following is a simple sufficient condition which is enough to demonstrate the optimality of the rate in our sample complexity estimate (Theorem 1), as a function of the sample size $N$. 
We underscore that the identification of necessary conditions for optimality of our rate, in the sample size $N$, is an interesting question in its own right; yet it is tangential to our paper’s primary objective. Assumption 1 (A Sufficient Condition for Optimality). Let $d = 1$ and $\mathbb{P} = \mathbb{P}_X \otimes \delta_0$, where the “sampling distribution” $\mathbb{P}_X$ is supported on $[0, 1]$ and is such that, if $X \sim \mathbb{P}_X$, then $X$ has mean $1/2$ and variance $1/4$. Assumption 1 is non-vacuous, with a simple example establishing existence being a standard fair Bernoulli trial; naturally, there is a plethora of more involved examples one can easily construct. Example 1 (Bernoulli Trial). Any fair Bernoulli trial $\mathbb{P}_X(X = 1) = \mathbb{P}_X(X = 0) = 1/2$ satisfies Assumption 1. Moreover, $\xi \stackrel{\mathrm{def.}}{=} 2X - 1$ is a Rademacher random variable (random sign). Theorem 2 (Optimality of the Rate in the Sample Size $\Theta(N)$). Let $d_0 (= d) = d_{T+1} \stackrel{\mathrm{def.}}{=} 1$. Consider the loss function $\ell : \mathbb{R}^2 \to [0, \infty)$ defined by $\ell(\hat{y}, y) \stackrel{\mathrm{def.}}{=} |\hat{y} - y|$ and suppose that each MLP in the class $\mathcal{F}$ represented by (4) satisfies the following minimum width requirement on its hidden layers $$ \eta_\star \stackrel{\mathrm{def.}}{=} \min_{t=1,\dots,T} \sqrt{d_{t+1}} - \sqrt{r} > 0. $$ Given any data-generating probability distribution $\mathbb{P}$ satisfying Assumption 1, if the entries of the random LoRA factors $\{B^{(t)}\}_{t=1}^{T+1}$ are i.i.d. 
standard normal, then: there is an absolute constant $c > 0$ such that for every $2(T+1)e^{-c\eta_\star^2} < \delta < 2(T+1)$ there is an $M > 0$ (required for the LoRA parameterization (6)) large enough such that $$ \mathbb{P}\Bigg( \sup_{f \in \mathcal{F}} \big| \mathcal{R}(f) - \mathcal{R}^N(f) \big| > \frac{1}{N} \Bigg) \geq \Big(1 - \frac{\delta}{2}\Big)\Big(1 - \Theta\Big(\frac{1}{\sqrt{N}}\Big)\Big). $$ Moreover, a “large enough” value of $M$ means that $M \in \Theta\Big(1/\big(\sqrt{d_1} - \sqrt{d_{T+1}} - \ln(2T/\delta)\big)\Big)$. Remark 3 (The Asymmetry in the Lower Bound - Randomizing $B$ vs. $A$). In contrast to Remark 2, our matching lower bound in Theorem 2 critically relies on the randomization of the factor $B$. This is because the smallest singular value of $B$ tends to be bounded away from zero with high probability, by Gordon’s Theorem (see, e.g., [Vershynin, 2010, Exercise 7.3.4]); in essence, $\eta_\star$ in (8) can be bounded away from $0$. In contrast, randomizing the factor $A$ yields only a vacuous lower bound on its smallest singular value via Gordon’s Theorem, preventing any conclusion about training $B$ “cancelling out” $A$; in essence, $\eta_\star$ in (8) cannot be bounded away from $0$. We now provide an overview of the core ideas behind the proof of our upper bound. All additional details, as well as the proof of the lower bound, are relegated to our paper’s appendix. # 3.3 Implications for LoRA End-Users It is precisely Lemma 4 that suggests why LoRAs with a single random factor generalize better than LoRAs with both trainable factors. 
Namely, the parameter-to-LoRA map in (1) is the composition of a factor-product map with the realization map, which sends a set of neural network parameters to the function those parameters realize. If only $B$ is trainable, the factor-product map is $$ B \mapsto AB, $$ which is linear, meaning that its derivative is of a constant order, on the scale of the largest singular value of $A$. If, instead, both $A$ and $B$ are trainable, then the parameter-to-LoRA map is pre-composed with $$ (A, B) \mapsto AB, $$ which is quadratic. Thus, its derivative grows linearly and is, in particular, unbounded. Consequently, the resulting Lipschitz constant of the parameter-to-LoRA map (1) would be significantly larger. Since every downstream estimate in Lemmata 8-9 scales as a function of this constant, training both factors seems to yield significantly larger covering numbers of the class of (random) LoRA learners and, consequently, larger (random) generalization bounds. This aligns with the information-theoretic bounds derived in [Zhu et al., 2024] (Lemma 4.5) and ultimately reinforces the practical guideline that, given a parameter budget, freezing $A$ and doubling the rank while fine-tuning only $B$ is preferable to distributing trainable parameters across lower-rank $B$ and $A$ matrices. Note that while this argument is technically symmetric in $A$ and $B$ (both in terms of the Lipschitz-constant reasoning and the results in [Zhu et al., 2024]), the parameter budget is typically allocated to the $B$ matrix due to its role as a feature projector. # 4 Explanation via Proof Strategy for Upper-Bound We recall that, for any $\varepsilon > 0$, the $\varepsilon$-covering number of a set in a metric space, defined rigorously in Appendix B.1, is simply the minimal number of balls of radius $\varepsilon$ required to cover every point of that set. We begin by first quantifying the generalization bounds of MLPs induced from their parameter spaces. 
Though this may yield loose bounds in general, due to inherent symmetries of MLPs $^1$, this seems to be the most natural approach for LoRAs, which operate directly on the MLP parameter space. Step 1 - Probabilistic Bounds on the Lipschitz Regularity of the Randomized LoRA Class We begin by controlling the random Lipschitz constant of our random parameter-to-LoRA map, defined in (6). Then, with probability at least $1 - \epsilon$ (over the random initialization of $(B^{(t)})_t$), the Lipschitz constant $L_{LoRA}^{\theta_{pre}}$ of the (random) LoRA map satisfies $$ L_{LoRA}^{\theta_{pre}} \leq 2^{c_2 T} \big( \underbrace{M\nu\sqrt{2r\log(2W/\epsilon)}}_{R} + \underbrace{\|\theta_{pre}\|_\infty}_{R_0} \big)^{c_2 T}, $$ where $\nu > 0$ is the random initialization scale of $B_{ij}^{(t)} \sim \mathcal{N}(0, \nu^2)$. Step 2 - Probabilistic Bounds on the Covering Number of the Randomized LoRA Class Having obtained a probabilistic estimate of the Lipschitz constant for our random LoRA parameterization map, we derive a probabilistic bound on the covering number of the associated randomized function class, conditioned on the following event depending on the draw of the randomized LoRA parameters: $$ \mathcal{B}_\epsilon \overset{\mathrm{def.}}{=} \left\{ \omega \in \Omega : L_{LoRA}^{\theta_{pre}} \leq 2^{c_2 T} \big( M\nu\sqrt{2r\log(2W/\epsilon)} + \|\theta_{pre}\|_\infty \big)^{c_2 T} \right\}. $$ The normed space under consideration is typically the parameter space $\mathbb{R}^p$, where the finite collection of model parameters is flattened and concatenated into a single vector. Unless otherwise stated, the norm used is the $\infty$-norm. 
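As a sanity check on the role of the parameter count, the classical $\infty$-norm covering estimate for a cube can be evaluated directly; the bound below is the elementary grid construction (illustrative only, not the precise constant of [Lorentz et al., 1996]):

```python
import math

# Illustrative grid bound: in the infinity-norm, the cube [-M, M]^p is
# covered by balls of radius eps centered on a grid of spacing 2*eps,
# giving N(eps) <= ceil(M / eps)^p, i.e. exponential in the dimension p.

def cube_covering_bound(M, eps, p):
    return math.ceil(M / eps) ** p
```

Since $\log \mathcal{N}$ grows linearly in the number of (trainable) parameters, plugging such an estimate into Dudley's entropy integral is what produces the $\sqrt{q}$ factor appearing in Theorem 1.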
We begin with a minor extension, requiring only a brief comment, of a classical covering estimate for balls in normed spaces; see, for instance, [Lorentz et al., 1996, Proposition 15.1.3]. The integer $p$ denotes the total number of parameters defining the LoRA mapping. Lemma 2 (Covering Number Bounds: Random LoRA Class). Under suitable assumptions, for each random initialization $\omega \in \Omega$ of the non-trainable parameters $(B^{(t)})_{t=1}^{T+1}$, define the LoRA function space $\mathcal{F}$ equipped with the metric induced by the uniform norm $\|\cdot\|_\infty$: $$ \mathcal{F} \stackrel{\mathrm{def.}}{=} \mathcal{F}(\omega, M) = \{ \mathrm{LoRA}(\omega, (A^{(t)})_{t=1}^{T+1}) \in C([0,1]^d, \mathbb{R}^D) : |A_{ij}^{(t)}| \leq M, \ \forall t, i, j \}. $$ Then, for every $\epsilon, \varepsilon > 0$ the $\varepsilon$-covering number $\mathcal{N}(\varepsilon, \mathcal{F})$ of $\mathcal{F}$ satisfies $$ \mathbb{P}\Big( \mathcal{N}(\varepsilon, \mathcal{F}) \leq \big( (2R + 2R_0)^{cT+1} / \varepsilon \big)^q \Big) \geq \mathbb{P}(\mathcal{B}_\epsilon) \geq 1 - \epsilon, $$ where $c = c_2$, $R = M\nu\sqrt{2r\log(2W/\epsilon)}$, and $R_0 = \|\theta_{pre}\|_\infty$, as in the Lipschitz bound of Step 1. Step 3 - Generalization Bounds Conditioned on Covering Number Bounds We control the randomized generalization gap $\mathbf{G}$ conditioned on the event $\mathcal{B}_\epsilon$, as defined in (16). Upon conditioning on the right realizations, which occur with high probability, our covering number bounds give us access to Dudley’s entropy-integral estimate; see e.g. [van der Vaart and Wellner, 1996, Corollary 2.2.8]. Lemma 3 (Conditional Generalization Bounds for “Derandomized” LoRAs). Under suitable assumptions, let $\mathbf{G} \stackrel{\mathrm{def.}}{=} \sup_{f \in \mathcal{F}} \big| \mathcal{R}(f) - \mathcal{R}^N(f) \big|$ be the generalization gap. The following holds: $$ \mathbb{P}\Big( \mathbf{G} \leq 4 \min\big\{ 1, \sqrt{q}\, 6\sqrt{A}/\sqrt{N} \big\} + \sqrt{8\log(2/\epsilon)}/\sqrt{N} \,\Big|\, \mathcal{B}_\epsilon \Big) \geq 1 - \epsilon. $$ Theorem 1 can be deduced from here by lower-bounding $\mathbb{P}(\mathbf{G} \leq G^\star)$ upon conditioning on $\mathcal{B}_\epsilon$ (which happens with probability at least $1 - \epsilon$). # 5 Experimental Validation We now verify our results experimentally, to see whether the general worst-case trend which our theory predicts is indeed reflected in practice. We remind the reader that our results are, very much, of a worst-case nature: both distribution-agnostic and algorithm-agnostic. Thus, even if we do expect to see the general pattern that the generalization gap increases as the LoRA rank $r$ grows, we do not exactly expect the square-root rate of $\tilde{\mathcal{O}}(\sqrt{r})$ (ignoring the effect of $N$) to manifest; rather, we expect the generalization gap to grow at a slower rate than the absolute worst case over all algorithms and data-generating distributions. CLIP-LoRA CLIP is a pretrained contrastive model that aligns image and text representations in a shared embedding space, enabling zero-shot and few-shot transfer across a variety of visual classification tasks. In our experiments, we apply LoRA to CLIP for downstream image classification on standard datasets, as in Zanella and Ben Ayed [2024]. The downstream classifier, or predictor $f$, is the LoRA-augmented CLIP model fine-tuned using cross-entropy loss. 
We observe that the generalization gap $\mathbf{G}$ generally increases with the LoRA rank $r$, suggesting that higher parameterization yields improved training performance but reduced generalization. See Appendix A for experiment details.

Figure 2: Generalization gap on different datasets in downstream classification tasks.

While this trend qualitatively aligns with theoretical predictions about overparameterization and generalization, the rate of increase in $\mathbf{G}$ with respect to $r$ is not as sharp as suggested by the theoretical upper bounds in Theorem 1. This discrepancy is expected, as such bounds are typically agnostic to the specific optimization algorithm and dataset used, and therefore tend to be loose in practice.
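The asymmetric parameterization analyzed above—frozen random factors $B^{(t)}$ with only the $A^{(t)}$ factors trained—can be sketched in a minimal, framework-free form. The dimensions, the Gaussian initialization of $B$, and the zero initialization of $A$ below are illustrative assumptions, not the paper's exact setup:

```python
# Minimal sketch of asymmetric LoRA: a frozen random factor B and a trainable
# factor A, so that the weight update is Delta W = B @ A of rank at most r.
import random

def lora_factors(d_in, d_out, r, seed=0):
    """Return frozen factor B (d_out x r) and trainable factor A (r x d_in)."""
    rng = random.Random(seed)
    B = [[rng.gauss(0.0, 1.0) for _ in range(r)] for _ in range(d_out)]  # frozen
    A = [[0.0] * d_in for _ in range(r)]  # trainable, zero-initialized
    return B, A

def delta_w(B, A):
    """Low-rank weight update Delta W = B @ A."""
    d_out, r, d_in = len(B), len(A), len(A[0])
    return [[sum(B[i][k] * A[k][j] for k in range(r)) for j in range(d_in)]
            for i in range(d_out)]

d_in, d_out, r = 64, 64, 4
B, A = lora_factors(d_in, d_out, r)
trainable_params = r * d_in           # only A is trained
full_params = d_in * d_out
print(trainable_params, full_params)  # 256 vs 4096: ~16x fewer trainable params
```

Only the `A` entries count toward the trainable parameter budget, which is why the bounds above scale with the rank $r$ rather than with the full weight dimension.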
Low-Rank Adaptation (LoRA) has emerged as a widely adopted parameter-efficient fine-tuning (PEFT) technique for foundation models. Recent work has highlighted an inherent asymmetry in the initialization of LoRA's low-rank factors, which has been present since its inception and was presumably derived experimentally. This paper focuses on providing a comprehensive theoretical characterization of asymmetric LoRA with frozen random factors. First, while existing research provides upper-bound generalization guarantees based on averages over multiple experiments, the behaviour of a single fine-tuning run with specific random factors remains an open question. We address this by investigating the concentration of the typical LoRA generalization gap around its mean. Our main upper bound reveals a sample complexity of $\tilde{\mathcal{O}}\left(\frac{\sqrt{r}}{\sqrt{N}}\right)$ with high probability for rank $r$ LoRAs trained on $N$ samples. We also determine the fundamental limits in terms of sample efficiency, establishing a matching lower bound of $\mathcal{O}\left(\frac{1}{\sqrt{N}}\right)$. By more closely reflecting the practical scenario of a single fine-tuning run, our findings offer crucial insights into the reliability and practicality of asymmetric LoRA.
[ "stat.ML", "cs.AI", "cs.LG", "cs.NE", "math.ST", "stat.TH" ]
# I. INTRODUCTION

Go's rising popularity in cloud, infrastructure, and blockchain applications raises security concerns due to its unique runtime and concurrency features (goroutines, channels). Reports indicate over 66% of Go modules contain vulnerabilities [4], highlighting the critical need for robust security analysis. Challenges include unsafe memory operations, pointer misuse, complex error handling, and the unpredictable nature of garbage collection and goroutine management. These factors, compounded by the intricacies of Go's concurrency model, raise the question of how to effectively identify and mitigate vulnerabilities in such a complex environment.

Contributions. To address these challenges, this paper proposes an innovative approach using concolic execution for the security verification of Go programs. Our framework leverages P-Code as an intermediate representation to model Go's execution semantics at a fine granularity. This work improves security verification by:

• Proposing a new approach for generating and parsing P-Code outside of Ghidra [5] [6], including examples of test programs and their corresponding P-Code files.
• Proposing a novel concolic execution method to uncover common vulnerabilities in Go, and in other languages whose binaries are convertible to P-Code, such as C.
• Implementing an open-source Proof-of-Concept of the concolic execution method and validating it with a custom dataset of binaries and encoded vulnerabilities.

This paper begins by overviewing techniques for securing Go code and their corresponding tools, followed by a comprehensive presentation of our contributions. It then details our proposed methodology, presents our evaluation results, reviews related work, and concludes with a summary of our contributions and potential future research directions.

# II. BACKGROUND: SECURING GO CODE

Security tools for Go address vulnerabilities using diverse methods, each with distinct strengths and limitations.
Static analysis tools such as gosec [7], go-vet [8], staticcheck [9], and errcheck [10] perform analyses to detect common issues like unchecked errors and unsafe pointer usage. While these tools integrate well into continuous integration workflows, their semantic checks can miss deeper vulnerabilities. Dependency-focused tools like Snyk [11] detect known vulnerabilities within third-party modules by continuously scanning dependency graphs against vulnerability databases, but they can overlook deeper application-specific logic errors. In contrast, CodeQL [12] provides a query-based static analysis framework supporting sophisticated data-flow analyses to detect complex vulnerabilities. Nevertheless, its effectiveness requires significant setup effort and query-writing expertise. Dynamic analysis tools, notably go-fuzz [13] and Google’s gofuzz [14], use randomized inputs to uncover runtime bugs such as buffer overflows or panics. Although effective at revealing edge cases, their limited path exploration can restrict their capability to identify state-dependent logical errors. Black-box frameworks like gopter [15] enhance fuzz testing by enabling stateful property-based testing; however, their lack of white-box insight can limit their effectiveness. Specialized tools like krf [16] and on-edge [17] specifically target vulnerabilities related to Go’s defer, panic, and recover, but their narrow scope and infrequent updates reduce their effectiveness against evolving threats. Existing tools address specific vulnerability classes but can remain insufficient for detecting complex flaws. # III. OVERVIEW OF THE CONTRIBUTIONS # A. Core contributions Zorya is a concolic execution framework designed to detect logic-related bugs, language-specific vulnerabilities, and uncover new patterns of security issues, primarily in Go binaries. As illustrated in Fig. 1, the analysis begins by generating CPU registers and memory dumps using gdb [18] at a user-specified address. 
Zorya then loads these dumps to initialize execution from the given starting point, ensuring a realistic and accurate representation of the program state. The core methodology involves translating binary code into raw P-Code, a low-level intermediate representation, which is subsequently parsed for precise execution path analysis. Zorya focuses on key aspects such as targeting the sections containing executable code in the studied binaries, supporting full binaries with runtime components, and accommodating shared libraries. Due to the lack of a complete Rust-based library for P-Code parsing, Pcode-parser was implemented from scratch, improving upon incomplete alternatives like sleigh-rs [19]. Zorya's engine, implemented in Rust, uses the Z3 SMT solver [20] and includes a state manager, a CPU state, a memory model, and a virtual file system. It emulates P-Code instructions (e.g., handle_int_add, handle_load) to track the execution and detect vulnerabilities in the analyzed binaries. Zorya supports both concrete and symbolic data types, x86-64 instructions and syscalls, and manages the program counter. Currently, Zorya analyzes single-threaded Go programs compiled with TinyGo, with plans to address multithreading and goroutines in future work.

# B. Proof-of-Concept Functionalities

Zorya's current Proof-of-Concept demonstrates several advanced functionalities essential to effective concolic execution analysis. It notably facilitates precise concolic handling of jump tables, which are specialized switch-table constructs replacing traditional binary searches with direct jumps for consecutive numeric labels (see jump_table.json). Furthermore, it can systematically identify and document cross-reference addresses leading directly to panic functions embedded within Go target binaries, considerably aiding targeted vulnerability assessments (see xref_addresses.txt).
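As an illustration of the kind of per-instruction emulation described above, the following is a minimal, self-contained sketch of concolic INT_ADD handling. The class names and SMT-LIB expression strings are illustrative assumptions: Zorya itself is written in Rust and builds real Z3 ASTs rather than strings.

```python
# Minimal sketch of concolic INT_ADD handling in the style of a P-Code
# emulator: each value is either fully concrete, or symbolic with a concrete
# witness that drives execution.
from dataclasses import dataclass

@dataclass
class Concrete:
    value: int

@dataclass
class Symbolic:
    witness: int   # concrete value driving execution
    expr: str      # stand-in for a real SMT (Z3) expression

MASK64 = (1 << 64) - 1

def handle_int_add(a, b):
    """Emulate P-Code INT_ADD over concolic operands (64-bit wrapping)."""
    if isinstance(a, Concrete) and isinstance(b, Concrete):
        return Concrete((a.value + b.value) & MASK64)
    wa, ea = _parts(a)
    wb, eb = _parts(b)
    return Symbolic((wa + wb) & MASK64, f"(bvadd {ea} {eb})")

def _parts(v):
    if isinstance(v, Concrete):
        return v.value, f"#x{v.value:016x}"
    return v.witness, v.expr

print(handle_int_add(Concrete(2), Concrete(40)))  # Concrete(value=42)
print(handle_int_add(Symbolic(5, "x"), Concrete(1)).expr)
```

Concrete operands stay concrete (cheap execution); as soon as one operand is symbolic, the handler propagates both a symbolic expression for the solver and a concrete witness, which is what lets concolic execution keep making progress without solving at every step.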
Additionally, Zorya is proficient in translating dynamically loaded executable sections of shared libraries, such as libc.so and ld-linux-x86-64.so, into P-Code, providing robust analysis capabilities for dynamically linked binaries. The current Proof-of-Concept implementation of Zorya also generates comprehensive execution logs, recording step-by-step instruction-level details for thorough analysis (see execution_log.txt). Additionally, Zorya systematically captures the executed symbols, including functions and their arguments' values, facilitating the tracking and recording of the effective execution (see execution_trace.txt). These detailed insights significantly enhance the ability to reconstruct execution paths, identify potential execution bottlenecks, and debug concolic execution processes.

Fig. 1. Overview of the contributions, including Zorya, Pcode-generator and Pcode-parser.

# IV. METHODOLOGY

# A. Concolic execution and Go analysis

Symbolic execution treats program variables as symbolic variables to explore all possible execution paths, while concrete execution runs the program with actual input values. Dynamic symbolic execution (DSE) benefits from the efficiency and decidability of concrete execution and the stronger guarantees of symbolic execution. Concolic execution is a specific type of DSE that uses concrete execution to drive symbolic execution, building symbolic representations while executing with concrete values. This approach helps generate new concrete inputs (test cases) to maximize code coverage and find bugs in real-world software. By running a program with real input values while simultaneously treating certain inputs as symbolic variables, concolic execution mitigates the exponential path growth encountered in purely symbolic approaches.
This makes it more practical for analyzing large binaries, such as Go-Ethereum's Geth, which has a size of 70 MB when unoptimized [21]. However, few symbolic or concolic execution tools can effectively analyze Go programs. This limitation arises primarily from their lack of support for multithreading and system calls, which are prevalent in Go's non-deterministic runtime. Table I indicates that radius2 and MIASM are among the few tools offering basic compatibility with Go. Zorya aims to provide the most adapted concolic execution framework for Go analysis.

Accepted in the 23rd IEEE/ACIS International Conference on Software Engineering, Management and Applications (SERA 2025)

TABLE I COMPARING Zorya TO EXISTING SYMBOLIC-EXECUTION-BASED TOOLS (SE: SYMBOLIC EXECUTION / CE: CONCRETE EXECUTION)

# B. P-Code as Intermediate Representation

P-Code, Ghidra's intermediate representation language, offers significant advantages for this research due to its integration with the robust disassembly framework released by the NSA in 2017. Compared to alternatives such as LLVM IR (with the not fully maintained gollvm compiler [22]), VEX [23], BIL [24], or REIL [25], P-Code provides a granular abstraction that combines low-level semantic detail with structural analyzability, and is tightly coupled with Ghidra's lifting pipeline. Other formats, such as WASM, are rarely used in backends or blockchain clients, limiting their relevance in this context. Similarly, performing symbolic execution directly on x86 machine code lacks the abstraction and maintainability offered by an IR. While P-Code itself refers to the low-level IR generated by Ghidra's Sleigh specifications, the decompiler constructs a higher-level representation on top of it, exposing control flow, data flow, and recovered variables. Although not formally a separate IR, this higher-level view is often used for source-like analysis [26].
In this work, we make the novel choice to operate directly on low-level P-Code, as its finer granularity preserves detailed instruction semantics—an essential property for precise symbolic reasoning on optimized or semantically intricate binaries. In addition, Ghidra's dedicated Go lifter [27] improves the accuracy of analysis by preserving language-specific features such as runtime metadata, goroutines, and channels. Custom Java scripts further extend Ghidra's capabilities for Go-specific inspection and analysis [28].

# C. Bug detection

Several strategies, optimized for Go binaries, can be implemented in Zorya to detect bugs effectively. The first strategy (S1) employs concrete execution combined with a flag-raising mechanism. This mechanism monitors the execution flow and triggers a signal when the program approaches the invocation of a panic function, as identified through the symbol list. This indicates that an error-inducing branch has been encountered. The second strategy (S2) integrates both concrete and symbolic execution (concolic execution). This approach uses a Z3-based symbolic invariant defined as: "The program counter must never point to an address that is a cross-reference to a panic function." This invariant ensures systematic exploration of paths to verify whether any execution violates the constraint, identifying potential vulnerabilities. For example, this invariant prevents crashes from nil pointer dereferences, a common cause of denial-of-service vulnerabilities. The third strategy (S3) focuses on targeted concolic analysis of specific functions. Zorya initiates execution at the function's address, with its arguments populated by symbolic variables and randomized concrete values. This hybrid approach allows for guided execution while verifying the satisfiability of custom invariants. These strategies are designed to be complementary. By default, at each execution of Zorya on a binary, strategy (S1) is enabled.
The analysis can then proceed by examining the entire binary starting from the start or main address (strategy (S2)), or by initiating execution at the address of a specific function (strategy (S3)).

# V. RUNNING EXAMPLE

Listing 1 presents a minimal Go program where assigning to a nil map triggers a runtime panic. Detecting this bug with Zorya involves the following steps. First, the code is compiled with TinyGo, and the resulting binary is translated to P-Code. Next, Zorya executes the P-Code, starting from the main.main address. During execution, Zorya updates its concolic state at each P-Code instruction. Since the bug is embedded in the code, the first bug detection strategy (S1) described in Section IV is employed. This strategy identifies the runtime panic at address 0x2034c5. Specifically, when the program counter reaches this address, Zorya raises a flag, halts the analysis, and reports an attempt to "add an entry to a nil map." This example, along with detailed evaluation results, is available in the Zorya-evaluation repository\*.

Listing 1. Example of a Go program attempting to assign to a nil map.

# VI. EVALUATION

In this section, we evaluate our approach based on the following research questions (RQ):

RQ1: Can P-Code be generated and used outside of Ghidra's framework?
RQ2: How does our method compare to existing symbolic or concolic execution approaches in terms of Go bug detection?
RQ3: Can our method be used for bug detection in other languages?

All our experiments are conducted on a 64-bit Linux machine with Ghidra 11.0.3, TinyGo v0.33.0, and gcc v11.4.0.

# RQ1: Generating and Using P-Code Outside Ghidra's Framework.

Ghidra's lack of an API for external P-Code generation required developing custom tools. Using its Java classes, we built the Pcode-generator to extract and save low- or high-level P-Code.
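The flag-raising mechanism of strategy (S1) amounts to checking each program-counter value against the set of panic cross-references; a minimal sketch, assuming a precomputed cross-reference set (0x2034c5 is the address from the running example; the other program-counter values are hypothetical):

```python
# Minimal sketch of strategy (S1): halt and report when the program counter
# reaches an address that cross-references a panic function. In the real tool
# this set would come from xref_addresses.txt; here it is hard-coded.
PANIC_XREFS = {0x2034C5}

def check_panic(pc):
    """Return a report string if pc hits a panic cross-reference, else None."""
    if pc in PANIC_XREFS:
        return f"flag raised at {hex(pc)}: panic branch reached"
    return None

# Simulated stream of program-counter values during concrete execution:
for pc in (0x2000F0, 0x2012A4, 0x2034C5):
    report = check_panic(pc)
    if report:
        print(report)
        break
```

Because the check runs on concrete execution, a raised flag corresponds to an actually reachable panic, which is why this strategy produces no false positives.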
Key challenges included symbol mapping from the .text and .rodata sections and implementing a low-level P-Code parser aligned with Ghidra's x86-64 specifications. Results (Table II) show high accuracy, with no false positives for Go binaries and minimal issues for C binaries, mainly caused by complex structures; complex structures and libc usage did not introduce false positives. File generation takes only a few seconds, depending on binary size, confirming the feasibility of efficient external P-Code use for Go and C program analysis.

TABLE II MEASURING ACCURACY OF THE GENERATION OF P-CODE ACCORDING TO BINARY SOURCE CODE LANGUAGE

Finding 1: P-Code, generated in low or high representation, can be used outside Ghidra for Go and C programs, though limitations arise for more complex structures.

# RQ2: Comparison with Existing Approaches.

Table III summarizes the detection results for common Go vulnerabilities associated with TinyGo runtime panics. The assessment was conducted using a benchmark of small Proof-of-Concept programs replicating widely known Go vulnerabilities, as detailed in the Zorya-evaluation repository\*. These vulnerabilities include critical issues described in the TinyGo documentation [29], such as nil pointer dereference, out-of-bounds index access, nil map assignments, excessive channel creation, and negative bit shifts. However, the three symbolic/concolic execution tools evaluated—DuckEEGO, Radius2, and MIASM—failed to detect any of these vulnerabilities. DuckEEGO, originally developed for Go 1.10, faces compatibility challenges due to significant changes introduced in modern Go versions. The adoption of Go modules mandates explicit go.mod files, breaking previous GOPATH-based dependency resolution.
Additionally, stricter enforcement in the reflect package results in failures during dynamic method resolution, necessitating explicit pointer receivers and enhanced error handling. Furthermore, go build no longer supports GOPATH-only projects, requiring a module-based compilation approach. While these issues were mitigated by manually initializing Go modules, adding replace directives, and refining method lookups, these adaptations were insufficient for DuckEEGO to detect any of the vulnerabilities in our benchmark. Radius2 was tested on Go binaries, but in all cases, the analysis terminated unexpectedly at arbitrary points without providing information on execution status or conclusions. This lack of transparency hindered its practical applicability for Go vulnerability detection. MIASM requires a Python configuration file specifying the strategy for identifying target vulnerabilities. However, its detection mechanism assumes that the bug is actively triggered during execution. In the case of nil pointer dereference, MIASM expects to dereference the pointer to observe the fault. Yet, Go binaries are compiled to redirect such operations to a panic routine instead of executing the faulty instruction directly. Consequently, MIASM fails to detect these vulnerabilities as it has not been adapted to account for Go’s panic handling mechanisms. In contrast, Zorya successfully identified all vulnerabilities without false positives. Its effectiveness is attributed to its reliance on concrete execution and the first detection strategy (S1) described in section IV, which flags panics when specific program counter values are reached. Additionally, Zorya’s detection workflow is highly efficient, completing analyses in under a minute. Its simple interface (zorya <path/to/bin>) supports interactive mode, allowing users to select the starting address and define custom invariants. 
TABLE III COMPARISON OF GO RUNTIME BUG DETECTION ACROSS DIFFERENT METHODS (DETECTED (D) / NOT DETECTED (ND))

Finding 2: Our method demonstrates improved performance over other symbolic execution approaches in the detection of common runtime bugs in Go using the TinyGo compiler.

# RQ3: Extending Zorya to bug detection in C.

To evaluate Zorya's ability to analyze binaries beyond Go, we tested it on three C programs featuring common vulnerabilities. By defining relevant invariants, Zorya successfully identified all issues. The first, a null dereference, was detected by checking during STORE and LOAD operations whether the pointer was null. The second, misaligned memory, was identified by verifying that the remainder of the Euclidean division of the LOAD address by the loaded size was zero; otherwise, the access was misaligned. The third, use of an uninitialized variable, was detected by confirming that any loaded address had previously been stored to in memory. As this approach incorporates concrete execution, it avoids false positives while maintaining the efficiency and simplicity of Zorya's commands demonstrated in RQ2.

Finding 3: Our method can be used on C binaries to detect null pointer dereferences, misaligned memory bugs, and the usage of simple uninitialized variables.

# VII. DISCUSSION AND VISION

Currently, Zorya identifies vulnerabilities related to TinyGo compiler panics in a single-threaded context. It must be extended to simulate multi-threaded execution to support programs built with the standard Go compiler and detect potential race conditions. On the symbolic side, Zorya still requires comprehensive evaluation, particularly in refining the detection strategies (S2) and (S3) described earlier. The current constraints stem from a limited symbolic exploration depth, which hinders the discovery of complex paths leading to panics.
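The three invariant checks used for the C benchmarks (null dereference, misalignment, uninitialized load) can be sketched over a toy byte-addressed memory model; the `Memory` class and addresses below are illustrative assumptions, not Zorya's implementation:

```python
# Minimal sketch of the three LOAD/STORE-time invariants described above,
# applied against a toy little-endian, byte-addressed memory model.
class InvariantViolation(Exception):
    pass

class Memory:
    def __init__(self):
        self.written = {}  # address -> byte value

    def store(self, addr, size, value):
        if addr == 0:
            raise InvariantViolation("null pointer dereference (STORE)")
        for i in range(size):
            self.written[addr + i] = (value >> (8 * i)) & 0xFF

    def load(self, addr, size):
        if addr == 0:
            raise InvariantViolation("null pointer dereference (LOAD)")
        if addr % size != 0:  # remainder of Euclidean division must be zero
            raise InvariantViolation("misaligned memory access")
        if any(addr + i not in self.written for i in range(size)):
            raise InvariantViolation("load of uninitialized memory")
        return sum(self.written[addr + i] << (8 * i) for i in range(size))

mem = Memory()
mem.store(0x1000, 4, 0xDEADBEEF)
print(hex(mem.load(0x1000, 4)))  # 0xdeadbeef
```

Each check is evaluated on the concrete side of execution, which keeps verdicts free of false positives while remaining cheap enough to apply at every memory operation.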
For C binaries, Zorya performs basic checks, such as preventing invalid pointer dereferences, but additional invariants and analysis techniques could be integrated. Moreover, advanced strategies, such as intelligent classification of concolic variables, may improve its ability to detect vulnerability patterns.

# VIII. RELATED WORK

Table I presents a detailed comparison of prominent symbolic execution tools, emphasizing their implementation languages, intermediate representation (IR) methods, and their specific compatibility or limitations with the Go language ecosystem. Tools such as MAAT [30], Haybale [31], and SymSan [32] are primarily developed in C++ and Rust, utilizing the LLVM Intermediate Representation (LLVM IR) to achieve symbolic execution at a low abstraction level. However, these tools inherently face compatibility challenges when directly analyzing Go binaries because the gollvm compiler lacks many functionalities [22]. In contrast, more versatile symbolic execution platforms such as Angr [33] and MIASM [34], despite their robustness across different binary formats and architectures, exhibit IR compatibility issues with Go binaries. This incompatibility predominantly arises because Angr relies heavily on VEX IR and P-Code emulation, which encounters difficulty accurately modeling Go-specific runtime structures, garbage collection routines, and goroutine management. Similarly, MIASM, leveraging Python-based modular architectures, experiences limited efficiency when dealing with Go's statically compiled binaries and internal abstractions, necessitating additional translation or adaptation layers. Radius2 [35], built on Radare2 [36], uses the ESIL intermediate language within a flexible, command-driven framework. However, ESIL's coarse abstraction often requires significant customization to support precise concolic execution, particularly for Go's concurrency and complex memory model.
Additionally, Ghidra plugins like GhiHorn [37] and CERT Kaiju [38], though not explicitly detailed in Table I, were critically evaluated for their capabilities in path-sensitive analysis and handling SMT (Satisfiability Modulo Theories) constraints directly within Ghidra’s interactive interface. These plugins are inherently limited by their tight coupling with Ghidra’s user-driven workflows and Java-based architecture, constraining their scalability and the degree of automated symbolic reasoning achievable with Go binaries. Lastly, DuckEEGO [39] is a source-level concolic execution framework for Go that operates on the abstract syntax tree (AST) prior to compilation. It supports only basic types—int, bool, and map[int]int—and lacks support for strings, structs, floating-point numbers, external libraries, and runtime functions. It also does not handle multiple return values, goroutines, or syscalls, making it unsuitable for concurrent or system-level analysis. As it transforms source code rather than binaries, it cannot be applied to closed-source programs or Go runtime internals.
The widespread adoption of the Go programming language in infrastructure backends and blockchain projects has heightened the need for improved security measures. Established techniques such as unit testing, static analysis, and program fuzzing provide foundational protection mechanisms. Although symbolic execution tools have made significant contributions, opportunities remain to address the complexities of Go's runtime and concurrency model. In this work, we present Zorya, a novel methodology leveraging concrete and symbolic (concolic) execution to evaluate Go programs comprehensively. By systematically exploring execution paths to uncover vulnerabilities beyond conventional testing, symbolic execution offers distinct advantages, and coupling it with concrete execution mitigates the path explosion problem. Our solution employs Ghidra's P-Code as an intermediate representation (IR). This implementation detects runtime panics in the TinyGo compiler and supports both generic and custom invariants. Furthermore, P-Code's generic IR nature enables analysis of programs written in other languages such as C. Future enhancements may include intelligent classification of concolic execution logs to identify vulnerability patterns.
[ "cs.SE", "cs.CR" ]
# 1 Introduction Industry and academia are increasingly using large language models (LLMs) to solve problems which require semantic understanding. These problems range from unstructured document processing [6, 11], to multi-modal question answering [34, 36, 43], to semantic search and ranking [37]. In order to achieve state-of-the-art performance on these tasks, practitioners often decompose the problem into modular subtasks within an AI program. Recently, programming frameworks including Palimpzest [23], LOTUS [29], DocETL [33], and others [2, 18, 24, 31] have proposed building these LLM-based applications out of semantic operators. Inspired by relational operators [10], semantic operators are AI-powered data transformations with natural language specifications. These include LLM-powered maps, filters, joins, aggregations, etc. and are useful for unstructured data processing tasks such as information extraction, summarization, ranking, and classification. Developers can define a semantic operator system by writing a declarative AI program in Palimpzest (or a similar framework). This defines a logical plan, which an optimizer can compile into a physical plan. As an example, Figure 1 illustrates a use case where a researcher wishes to search for papers relevant to their interests. First, the program loads the papers and filters for ones related to data systems. Then, the program computes a summary of each paper’s main contributions. Finally, the papers are classified as having high or low relevance to the author’s research interests. In order to execute this program, the optimizer must choose how to implement each semantic filter and map operation in terms of underlying physical operations (e.g., calls to LLMs or other models). A semantic operator may simply be implemented with a single invocation of a specific LLM. 
However, it may also be implemented with more complex techniques, including a Mixture-of-Agents [40] (i.e., a layered architecture of LLM ensembles), a self-correction loop [25], or a reduced-context generation [20], to name a few. With access to a handful of models and hyperparameters, these techniques alone can provide an optimizer with thousands of physical implementation alternatives that trade off operator quality, dollar cost, and latency (Section 4.1).

Goal. The optimizer's goal is to compile a semantic operator program to a physical plan which is (near-)optimal for the developer's objective with respect to system quality, cost, and latency. For example, in the center and right-hand side of Figure 1, we show two physical plans for two different optimization objectives. The first plan is compiled with the goal of maximizing system output quality, while the second is compiled to maximize quality subject to spending less than $1 on processing the entire workload.

Figure 1: A declarative Palimpzest program over research papers (load, a semantic filter for data-systems papers, and semantic maps for contribution summarization and relevance classification) compiled under two objectives: maximize quality (Acc: 0.90, Cost: $10.0, Latency: 400s) and maximize quality at cost < $1 (Acc: 0.80, Cost: $0.75, Latency: 300s), using physical operators such as GPT-4o or Llama3.1-8B filters and Mixture-of-Agents, critic-and-refine, or reduced-context maps.

Ideally, a cost-based optimizer can weigh the trade-offs of different physical operators to implement each plan in an optimal fashion.
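The shape of such a declarative program can be illustrated with a self-contained stand-in; the `Dataset` class and the stubbed keyword "LLM" below are illustrative assumptions, not Palimpzest's actual API:

```python
# Minimal sketch of a semantic-operator pipeline in the declarative style of
# Figure 1: logical operators carry natural-language specs, and execution
# dispatches each spec to a model call (stubbed here with a keyword heuristic).
class Dataset:
    def __init__(self, records):
        self.records = list(records)
        self.ops = []  # logical plan: list of (operator, natural-language spec)

    def filter(self, predicate):
        self.ops.append(("filter", predicate))
        return self

    def map(self, instruction):
        self.ops.append(("map", instruction))
        return self

    def run(self, llm):
        out = self.records
        for op, spec in self.ops:
            if op == "filter":
                out = [r for r in out if llm(spec, r) == "yes"]
            else:
                out = [{**r, "out": llm(spec, r)} for r in out]
        return out

def stub_llm(spec, record):
    """Keyword heuristic standing in for an actual LLM invocation."""
    if spec.startswith("the paper is about data systems"):
        return "yes" if "database" in record["title"].lower() else "no"
    return f"[{spec}] {record['title']}"

papers = [{"title": "A Vector Database Survey"}, {"title": "Quantum Gravity Notes"}]
ds = Dataset(papers).filter("the paper is about data systems") \
                    .map("summarize the main contributions")
print(ds.run(stub_llm))
```

The point of the separation between `ops` (the logical plan) and `run` (one physical realization) is exactly what gives an optimizer room to swap in different models or techniques per operator without changing the user's program.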
While existing semantic programming frameworks have implemented optimizers for semantic operator systems, they are customized to support a narrow set of optimizations [29, 33]. Furthermore, most optimize for a single objective—typically system output quality—and do not consider constraints on other dimensions such as system cost or latency. In brief, there does not exist a fully general-purpose, cost-based optimizer for semantic operator systems. Our Approach. In this paper, we describe Abacus, a new cost-based optimizer inside of Palimpzest [23], which addresses the shortcomings of existing optimizers for this setting. In contrast to prior work, Abacus can optimize system output quality, dollar cost, or latency with respect to zero or more constraints on the other dimensions. Similar to a Cascades query optimizer [7], Abacus uses implementation and transformation rules to define a valid set of physical plans. Unlike the relational query optimization setting, however, there are no principled models for estimating the quality, cost, and latency of semantic operators, ultimately necessitating changes to make the optimization process tractable. For example, an implementation rule might implement a semantic map with one of many possible Mixture-of-Agents (MoA) architectures. However, it is difficult to know a priori how well a given MoA will perform on a specific task like summarizing the contributions of a research paper. Furthermore, modeling the cost and latency of the operator is challenging due to uncertainty in the number of output tokens it will generate. In contrast, relational query optimizers can use precomputed statistics and principled models like cardinality estimators to quickly estimate the cost of millions of physical operators and plans. Building an effective optimizer for semantic operator systems requires overcoming three significant challenges. First, as discussed above, it is difficult to predict the performance of an operator on a new workload.
This makes sampling—i.e., processing (validation) inputs with operators and observing their quality, cost, and latency—an important tool for estimating performance. However, given the cost of invoking LLMs, Abacus must be judicious in choosing which physical operators to sample and how many samples to spend on each operator. This is especially difficult in constrained optimization settings, where Abacus must discover the Pareto frontier of physical operators as opposed to a single objective-maximizing operator. To this end, Abacus modifies the traditional infinite-armed bandit algorithm [1, 4] to enable it to search for the Pareto frontier of physical operators. The algorithm can also leverage prior beliefs about operator performance to significantly accelerate its search. Second, similar to relational query optimization, the space of physical plans grows combinatorially with the number of operators in the system. However, while relational query optimizers can use precomputed statistics to estimate plans at scale, Abacus's sample-based approach quickly becomes too expensive. To mitigate this issue, Abacus approximates system performance as a function of its individual operators' performance. This decomposition allows Abacus to estimate the performance of a combinatorially large space of systems given a much smaller set of operator estimates. Third, the traditional dynamic programming algorithm used in Cascades [7] is not designed to support constrained optimization problems. To overcome this, Abacus implements a new Pareto-Cascades algorithm which keeps track of the Pareto frontier of subplans throughout the optimization procedure.

Results. We implement Abacus as an optimizer in Palimpzest and evaluate its ability to optimize systems for document processing workloads in the biomedical and legal domains (BioDEX; CUAD) and multi-modal question answering (MMQA). We find that Abacus is able to identify plans with 20.3%, 18.7%, and 39.2% better quality, respectively, than similar plans optimized by DocETL and LOTUS. Furthermore, plans optimized by Abacus are on average 23.6x cheaper and 4.2x faster relative to the next best system on BioDEX. We also demonstrate that prior beliefs can significantly improve Abacus's ability to (1) optimize system performance and (2) satisfy system constraints. Specifically, we show that Abacus with priors can identify plans with up to 3.04x better quality than Abacus without priors at a fixed sample budget. Furthermore, we show that Abacus's Pareto-Cascades algorithm satisfies more optimization constraints than a greedy baseline in every setting we evaluate (except for one tie), and we show that its performance improves significantly when using priors. Finally, we demonstrate that Abacus satisfies constraints in a non-trivial manner and improves system performance as constraints are relaxed.

In summary, we present Abacus, an extensible optimizer for semantic operator systems. Our main contributions are:

• Abacus, an extensible, cost-based optimizer which allows for new semantic operators and optimization rules without changes to its host programming framework (Section 2).
• The implementation of two algorithms which enable (1) efficient search over the space of semantic operator systems and (2) constrained optimization of these systems (Section 3).
• Quality improvements of up to 39.2% over competing state-of-the-art systems, with cost and runtime savings of up to 23.6x and 4.2x, respectively (Section 4).
• An investigation of Abacus's algorithmic contributions and a demonstration that prior beliefs can greatly improve optimization outcomes (Section 4).

# 2 System Overview

In this section we present an overview of Abacus. First, we provide a brief background on semantic operators and the programming frameworks which optimize them.
Then, we describe the end-to-end process by which Abacus optimizes semantic operator systems. Finally, we motivate the need for two key algorithms to make Abacus's optimization tractable, which we examine in Section 3.

# 2.1 Background: Semantic Operator Systems

Background and terminology. Recent work has explored the use of semantic operators to implement data processing pipelines over unstructured data. Semantic operators are a set of AI-powered data transformations which mirror and extend relational operators [10]. The key difference between semantic operators and their relational counterparts is that their semantics are specified in natural language as opposed to a SQL expression or user-defined function. As a result, these operators' physical implementations typically require the use of one or more foundation models with semantic understanding.

Table 1: Semantic operators supported by Abacus. In our implementation $d$ is a (valid) JSON dictionary, but in principle $d$ can be any serializable object. The $\cup$ symbol represents the union of output types. $i$ is an integer index, $P$ is a filter predicate, $V$ is a vector database, and $L$ is an integer limit. Aggregate includes group-by operations.

As an example, two common semantic operators which appear in all AI data processing systems are map and filter. The program in Figure 1 contains two semantic maps: one for summarizing the contributions of a research paper and one for classifying its relevance to the user's research interests. Each operation requires the user to specify the map instruction in natural language. As another example, the semantic filter in Figure 1 is defined with a natural language predicate to filter for research papers about data systems. Semantic programming frameworks like Palimpzest [23], LOTUS [29], DocETL [33], and Aryn [2] enable users to compose semantic operators into pipelines or directed acyclic graphs (DAGs).
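Such a program can be sketched with a minimal toy abstraction. The class and function names below are illustrative only, not the actual Palimpzest API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SemanticOp:
    """A logical semantic operator: an operator kind plus the natural-
    language instruction that a physical implementation (in practice,
    one or more LLM calls) would interpret."""
    kind: str          # e.g. "map" or "filter"
    instruction: str   # natural-language semantics

def build_pipeline(*ops: SemanticOp) -> List[SemanticOp]:
    """Compose operators into a linear semantic operator system; real
    frameworks also allow general DAGs."""
    return list(ops)

# The Figure 1 program, expressed with this toy abstraction:
program = build_pipeline(
    SemanticOp("filter", "the paper is about data systems"),
    SemanticOp("map", "summarize the contributions of the paper"),
    SemanticOp("map", "classify relevance to the user's research interests"),
)
```

The point of the abstraction is that the pipeline fixes only logical semantics; choosing a physical implementation for each operator is left entirely to the optimizer.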
We refer to these computation graphs of semantic operators as semantic operator systems. Each framework implements an evolving and growing set of semantic operators, thus we highlight the operators currently supported by Abacus in Table 1, some of which were recently added to Palimpzest. Abacus is not limited to optimizing the set of operators in Table 1, and can be extended to support other operators including joins. Each semantic operator corresponds to a logical operator which may be implemented by a variety of physical operators. For example, two of the semantic map operators in Figure 1 are implemented with a Mixture-of-Agents [40] architecture and a Reduced-Context generation (Section 4.1). The former is a layered computation graph of LLM ensembles, while the latter samples only the most relevant input chunks before feeding them to an LLM. Each of these physical operators can be parameterized in numerous ways (e.g. the models and temperature settings for Mixture-of-Agents; the model, chunk size, and number of chunks for Reduced-Context) leading to a large space of physical operators. We present the full set of physical operators for Abacus in Section 4.1.

Existing Optimizers. Semantic programming frameworks take different approaches to optimizing semantic operator systems. In the original paper, Palimpzest [23] optimizes programs by executing a set of “sentinel plans” which contain a diverse (but small) set of physical operators. Its optimizer then uses heuristics to extrapolate the performance of physical operators which it did not sample. LOTUS [29] optimizes semantic filter, join, group-by, and top-$k$ operators by using cheap approximations to a more expensive “gold algorithm”. The gold algorithm is a predefined physical implementation of the semantic operator which is treated as an oracle.
For example, LOTUS optimizes semantic joins by sampling inputs to learn the correlation between the answer produced by the gold algorithm (e.g., asking GPT-4o to evaluate every join tuple) and the score produced by a cheap proxy (e.g., the embedding similarity between join candidates). LOTUS then implements the join with a cascade where (1) all candidates scoring above a high threshold are joined, (2) all candidates scoring below a low threshold are not joined, and (3) all candidates scoring between the thresholds are processed by the gold algorithm. In general, LOTUS seeks to provide guarantees on operator quality (with respect to the gold algorithm) while saving cost and runtime. Finally, DocETL [33] and Aryn [2] both apply query rewrites to semantic operator systems. DocETL enables users to author data processing pipelines in YAML, where pipeline steps correspond to semantic operators. DocETL then takes the user's pipeline and a set of “query rewrite directives” and uses the reasoning capabilities of an LLM to rewrite the pipeline in a top-down fashion, while validating performance with a separate LLM. Aryn uses an LLM to construct query plans from users' natural language questions, which it then validates and refines with a human-in-the-loop. Given this background, we will now discuss Abacus, an extensible, cost-based optimizer which applies rules to search the space of semantic operator systems and supports constrained optimization.

# 2.2 Abacus Optimizer

We illustrate Abacus's end-to-end process for optimizing semantic operator systems, beginning with a high-level overview of its key steps which are shown in Figure 2:

• The user provides a program, an optimization objective, the input dataset, and (optionally) a validation dataset.
• Abacus compiles the program to a logical plan and then applies a set of rules to the logical plan to create an initial search space of physical operators.
• Next, for each logical operator, Abacus samples a set of physical operators and processes some (validation) inputs to observe the operators' quality, cost, and latency. Based on its observations, Abacus iteratively samples new physical operators and inputs, generates more observations, and refines its estimates of operator performance.
• Once Abacus exhausts its sample budget, the operator estimates are used to construct the final physical plan.

We now discuss each of these steps in more detail.

Inputs and Compilation. Abacus takes four inputs: an AI program, an optimization objective, an input dataset, and (optionally) a small validation dataset. The AI program must be a pipeline or DAG of semantic operators supported by Abacus. The optimization objective is a constrained or unconstrained objective with respect to system output quality, dollar cost, and/or latency (e.g., “maximize quality and spend less than \$10 to process the input dataset”). The input dataset is an unstructured dataset of documents, images, songs, etc. which the physical implementation of the AI program will process. Finally, the validation dataset is a small set of labeled input-output pairs which Abacus can use to evaluate physical operators' quality. For example, in Figure 1, the AI program consists of a semantic filter followed by two semantic maps. The figure shows two objectives: maximizing quality and maximizing quality subject to a constraint on cost. The input dataset is a set of research papers, and the validation dataset (not shown) could be a handful of additional research papers whose relevance has been labeled. Finally, given these inputs, Abacus compiles the program into a logical plan, where each semantic operator corresponds to a logical operator.

Creation of Search Space.
Once the user's program has been compiled to a logical plan, Abacus uses (a subset of) its transformation and implementation rules to enumerate a space of valid physical operators for each logical operator. This corresponds to the Search Space in Figure 2. Each rule consists of two parts: (1) a pattern matching function which defines the logical (sub)plan the rule can be applied to and (2) a substitution function which applies the rule. Transformation rules take a logical subplan and transform it into a functionally equivalent logical subplan. For example, a transformation rule may swap a filter and a map operation such that the filter is executed before the map. As another example, if a map operation computes $N$ fields, a transformation rule could split it into $N$ map operations. Implementation rules define ways to implement semantic operator(s) in a logical plan. For example, an implementation rule may implement a map operator with a Mixture-of-Agents or a Reduced-Context generation as depicted in Figure 1.

Operator Sampling. Given the search space of physical operators, Abacus seeks to identify ones which can be composed into physical plans that optimize the user's objective. For unconstrained optimization (e.g. maximizing plan quality), this implies finding high-quality physical operator(s). For optimization with constraints (e.g. maximizing plan quality subject to a cost constraint), this suggests finding physical operators which lie on the Pareto frontier of the cost vs. quality trade-off. To this end, Abacus initially samples a small batch of physical operators for each logical operator. If Abacus has access to prior beliefs about operator performance, it samples operators which are believed to lie closest to the Pareto frontier of the optimization objective. Otherwise, it samples operators at random. Given these operators, Abacus executes them on inputs sampled from the validation dataset (or the input dataset if no validation data is present).
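The two-part rule structure described earlier (a pattern-matching function plus a substitution function) can be sketched as follows. The plan representation and the filter-pushdown rule here are simplified illustrations, not Abacus's actual data structures:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# In this sketch, a logical plan is a list of (op_kind, instruction) pairs.
LogicalPlan = List[Tuple[str, str]]

@dataclass
class TransformationRule:
    """A rule pairs a pattern-matching predicate with a substitution
    function, mirroring the two-part structure described above."""
    matches: Callable[[LogicalPlan, int], bool]
    substitute: Callable[[LogicalPlan, int], LogicalPlan]

def map_filter_match(plan: LogicalPlan, i: int) -> bool:
    # Matches a map immediately followed by a filter.
    return i + 1 < len(plan) and plan[i][0] == "map" and plan[i + 1][0] == "filter"

def map_filter_swap(plan: LogicalPlan, i: int) -> LogicalPlan:
    # Push the filter before the map (this assumes the filter does not
    # depend on the map's output -- a check a real optimizer must make).
    new_plan = list(plan)
    new_plan[i], new_plan[i + 1] = new_plan[i + 1], new_plan[i]
    return new_plan

push_down_filter = TransformationRule(map_filter_match, map_filter_swap)

plan = [("map", "summarize the contributions"),
        ("filter", "is about data systems")]
if push_down_filter.matches(plan, 0):
    plan = push_down_filter.substitute(plan, 0)
```

Implementation rules would follow the same shape, except their substitution function emits a physical operator (e.g. a parameterized Mixture-of-Agents) rather than a rewritten logical subplan.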
Abacus measures the quality, cost, and latency for each operator on each input. To measure quality, Abacus uses the output label(s) from the validation dataset wherever they are available. However, when no label exists—either because validation data is not provided or it does not contain some intermediate labels—Abacus evaluates each operator's output relative to the output of the operator with the best prior belief on its quality. Once it has estimated each operator's quality, cost, and latency, Abacus computes the Pareto frontier of physical operators (with respect to the optimization objective) for each logical operator. Physical operators which fall too far from the frontier are removed, and new operators are sampled to replace them.

Figure 2: Overview of Abacus's end-to-end optimization. The user's inputs (optimization objective, input data, and optional validation data) are compiled to a logical plan (Scan → Filter → Map → Map); rules enumerate a search space of candidate physical operators for each logical operator; sampling produces per-operator estimates of quality, cost, and latency; and a final physical plan is selected. The example program filters research papers about data systems, then maps to summarize their contributions and classify their relevance to the user's interests.

The next batch of inputs is then processed with the new operator frontiers, and the process repeats until the sample budget (or an upper limit on optimization cost in dollars) has been reached.

Final Plan Selection. Once the sample budget is exhausted, Abacus needs to construct a final plan to use for processing the input dataset. To this end, it computes each physical operator's average quality, cost, and latency on sampled inputs.
Abacus then passes these estimates and the user's optimization objective to its Pareto-Cascades algorithm (Section 3.2) which returns the optimal plan.

Full Algorithm. The full algorithm for Abacus is shown in Algorithm 1. The user program is compiled into an initial logical plan on line 1. On line 2, Abacus applies implementation and transformation rules to create a search space of physical operators which can implement each logical operator. Line 3 initializes a cost model which keeps track of each operator's average quality, cost, and latency. We describe the cost model in more detail in Section 2.3. On line 4, Abacus samples an initial “frontier” of $k$ physical operators for each logical operator. On line 7, each frontier processes a sample of $j$ inputs, updating the number of samples drawn. This also yields a set of observations of operator quality, cost, and latency, which are used to update the cost model on line 8. On line 9, operators which perform poorly are replaced in each frontier. We discuss the algorithm for updating the operator frontiers in detail in Section 3.3. Once the number of samples drawn exceeds the sample budget on line 6, the operator sampling stops. Finally, on line 11 Abacus's Pareto-Cascades algorithm returns the optimal physical plan with respect to the operator estimates and the optimization objective. We discuss the Pareto-Cascades algorithm in detail in Section 3.2.

# Algorithm 1 Abacus algorithm
Require: program $P$, objective $O$, validation data $D$
Parameters: sample budget $B$; frontier size $k$; batch size $j$
1: $logical\_plan$ = compile($P$)
2: $search\_space$ = applyRules($logical\_plan$)
3: $M$ = initCostModel()
4: $F$ = sampleOpFrontiers($search\_space$, $k$)
5: $samples\_drawn$ = 0
6: while $samples\_drawn < B$ do
7:   $outputs$, $samples\_drawn$ = processSamples($F$, $D$, $j$)
8:   $M$ = updateCostModel($M$, $outputs$)
9:   $F$ = updateFrontiers($F$, $M$, $O$)
10: end while
11: return ParetoCascades($logical\_plan$, $M$, $O$)

# 2.3 Key Challenges in Optimization

We now discuss the key challenges which motivate the design of Abacus's cost model, operator sampling algorithm, and final plan selection algorithm.

Cost Model. Given a logical plan with $M$ semantic operators and a choice of $N$ physical implementations per operator, the space of possible physical plans is of size $O(N^M)$ before even considering operator re-orderings. Even for relatively modest values of $M$ and $N$, the space of plans can grow too large to sample each plan and measure its output quality, cost, and latency on a validation dataset. To address this, Abacus makes the simplifying assumption that operators are independent, and each plan can be modeled as a function of its operators. Our model for the plan quality $(p_q)$, cost $(p_c)$, and latency $(p_l)$ as a function of its operators' quality $(o_{qi})$, cost $(o_{ci})$, and latency $(o_{li})$ is shown below:

$$\hat{p}_q = \prod_{i=1}^{M} \hat{o}_{qi} \qquad \hat{p}_c = \sum_{i=1}^{M} \hat{o}_{ci} \qquad \hat{p}_l = \operatorname*{max}_{\mathrm{path} \in \mathcal{P}} \sum_{i \in \mathrm{path}} \hat{o}_{li}$$

Abacus models plan cost as the sum of its individual operators' estimated cost. Similarly, it models plan latency as the maximum latency path through the semantic operator system. Plan quality is modeled by taking the product of its operators' estimated qualities (without loss of generality, we assume $\hat{p}_q, \hat{o}_{qi} \in [0, 1]$). Using a summation (or average) to compute plan quality would also satisfy the assumption of operator independence.
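Under the independence assumption, composing per-operator estimates into plan estimates (and scoring a combinatorially large plan space from a handful of operator estimates) reduces to a few lines. This is a minimal sketch with illustrative numbers, not Abacus's implementation:

```python
from itertools import product
from math import prod

def plan_quality(op_qualities):
    """Product of per-operator quality estimates (Eq. 1); values in [0, 1]."""
    return prod(op_qualities)

def plan_cost(op_costs):
    """Sum of per-operator dollar-cost estimates (Eq. 1)."""
    return sum(op_costs)

def plan_latency(paths):
    """Max total latency over root-to-leaf paths of the operator DAG;
    each path is a list of per-operator latency estimates."""
    return max(sum(path) for path in paths)

# Two sampled physical candidates (quality, cost) for each of M = 3
# logical operators: 6 operator estimates score 2**3 = 8 plans.
candidates = [
    [(0.95, 0.001), (0.75, 0.003)],  # filter implementations
    [(0.78, 0.540), (0.31, 0.070)],  # first map implementations
    [(0.84, 0.310), (0.60, 0.050)],  # second map implementations
]
plans = [
    (plan_quality([q for q, _ in combo]), plan_cost([c for _, c in combo]))
    for combo in product(*candidates)
]
best_quality = max(q for q, _ in plans)
```

Note how $K \cdot M = 6$ operator estimates suffice to score all $K^M = 8$ plans, which is exactly the trade the decomposition makes: a small, noisy set of measurements in exchange for coverage of the full combinatorial space.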
Finally, we note that Abacus’s model of plan quality has the property that replacing any individual operator with a higher quality operator will improve plan quality. Even though this plan quality metric may not directly match the evaluation metric for the program, this property enables the optimizer to make local improvements to the plan. One limitation of this cost model is that it fails to model interactions between operators. For example, if a semantic filter uses the summary produced by an upstream map operator as input, then this cost model will fail to capture that the filter’s performance is correlated with the quality of the map operator’s summary. However, this model enables Abacus to estimate a combinatorially large space of plans from a much smaller set of sampled operators. For example, suppose Abacus samples and estimates the performance of $K$ physical operators for each of the $M$ operators in a logical plan. Given these $K \cdot M$ estimates, Abacus can model $O ( K ^ { M } )$ physical plans – even if most of them have never been sampled directly. Effectively, this approach to cost modeling trades off accuracy in estimation for an increase in the number of plans that can be modeled with some noise. Empirically, we find that giving Abacus the ability to search over a larger (albeit noisy) space of plans enables it to select better plans than competing systems on a range of benchmarks (see Section 4). Operator Sampling Challenges. For large enough $N$ , it can be computationally infeasible to sample every physical operator for even a single semantic operator. For example, in our implementation of Abacus (Section 4.1), a semantic map can be implemented with approximately 2,800 different operators. While the task of finding and choosing physical operator(s) may seem daunting, it is important to recognize that Abacus does not need to find the maximally optimal physical plan in order to provide value to the user. 
In most settings, Abacus simply needs to produce a plan which is “good enough” for the user's application goals. This relaxes the operator search problem from finding the single best “needle in a haystack” to finding at least one operator from a handful of good options. In the sampleOpFrontiers() function in Algorithm 1, we sample an initial set (i.e., frontier) of physical operators for each logical operator in the plan. Then, in the updateFrontiers() function we update each frontier of physical operators based on their observed quality, cost, and latency on sampled inputs. We model the sampling of physical operators as a multi-armed bandit (MAB) problem. Intuitively, given a fixed sampling budget, we seek to navigate the exploration-exploitation trade-off between sampling new (potentially better) operators and sampling the previously best observed operator(s) to refine our confidence in their performance. Unfortunately, the traditional MAB formulation is focused on finding the single most-optimal arm for an unconstrained objective. However, constrained optimization requires that we account for the trade-off between the optimization objective and the constraint(s). To this end, we modify the traditional MAB formulation to navigate the exploration-exploitation trade-off across the entire Pareto frontier of operators. We formalize this algorithm in Section 3.3.

Final Plan Selection Challenges. Once Abacus finishes sampling operators and constructs its final cost model, it still needs to identify and return the optimal physical plan. For a single-dimensional objective such as minimizing plan cost, Abacus can invoke a traditional Cascades [7] algorithm to recover the minimum cost plan. Note that this is not as simple as selecting the cheapest physical operator for each semantic operator—the order of operators must also be considered in the presence of filters, maps with multiple outputs, and/or joins.
An overview of the traditional Cascades algorithm is provided in Section 3.1. However, for constrained optimization such as minimizing cost subject to a lower bound on plan quality, the traditional Cascades algorithm is insufficient. The key issue is that Cascades will only keep track of the “best” implementation of every subplan. However, in the constrained setting—where we care about multiple dimensions of plan performance—finding the optimal plan requires considering the Pareto frontier of optimization trade-offs at each subplan. We implement the Pareto-Cascades algorithm to overcome this challenge, and discuss its implementation in Section 3.2.

# 3 Algorithms

In this section we discuss the two primary algorithmic contributions within Abacus: its Pareto-Cascades and operator sampling algorithms. First, we present a high-level overview of the traditional Cascades algorithm from relational query optimization. Then, we introduce our Pareto-Cascades algorithm. Finally, we describe Abacus's multi-armed bandit (MAB) operator sampling algorithm.

# 3.1 Traditional Cascades Optimization

Before discussing the Pareto-Cascades algorithm, we first present an overview of the traditional Cascades algorithm [7, 42]. Cascades takes a logical plan, a cost model, and a set of implementation and transformation rules as input. Given these inputs, the Cascades algorithm seeks to find an implementation of each operator that globally optimizes the entire plan to meet some objective (e.g. minimizing execution cost / latency). In this setting, the cost model is a function which takes a physical subplan as input and returns the expected cost / latency of the subplan. To illustrate how the Cascades algorithm works, consider the toy example shown in Figure 3. As a first step, Cascades converts the logical plan into an initial group tree. Each group represents the execution of a unique set of operators.
In Figure 3, we expand the final group, which represents the execution of all operators in the plan.

Figure 3: A toy Cascades example. The logical plan (Scan → Map → Filter) is converted into an initial group tree with groups [S], [SM], and [SMF]. The final group [SMF] is shown in its initial state, with a single logical expression ([SM] → F), no physical expressions, and no properties, and in its fully optimized state, with logical expressions such as [SM] → F and [SF] → M and physical expressions such as [SM] → Filter 1, [SM] → Filter 6, [SF] → Map 10, and [SF] → Map 3.

Each group has a set of logical expressions and physical expressions, which represent unique logical and physical (sub)plans which can implement that group. Initially, each group has a single logical expression which is translated directly from the logical plan (in this case, executing filter $F$ after map $M$ and scan $S$). Given the initial group tree, the Cascades algorithm searches the space of possible physical plans by applying a series of tasks in a dynamic programming algorithm shown in Algorithm 3. We briefly discuss the role of each task.

Optimize Group. The OptimizeGroup task takes an input group, iterates over its logical and physical expressions, and returns new tasks to optimize each logical and physical expression. The Cascades algorithm begins with a task to optimize the final group (which represents the execution of the entire plan). Since the initial group tree has a single logical expression, this will result in a single new task to optimize this logical expression. As a final note, this task performs a lookup to make sure the group has not already been optimized. If it has, the task returns no new tasks.

Optimize Logical Expression. The OptimizeLogicalExpr task takes the set of implementation and transformation rules and applies all applicable rules to the logical expression associated with this task. The task will iterate over each rule and call its pattern matching function to determine whether it can be applied to the logical expression.
It will also perform a lookup to make sure the rule has not already been applied to the expression during the search process. For each rule which may be applied, this task will return a new task to apply that rule to the logical expression. Apply Rule. The ApplyRule task is initialized with a rule and a logical expression. The task invokes the rule’s substitution function which creates a new logical expression (if it’s a transformation rule) or a new physical expression (if it’s an implementation rule). Some transformation rules may create new groups and new logical expressions. For example, a transformation rule which swaps the filter with the map will create a new group [SF]. Any new groups (and expressions) are added to the space of all groups $G$ in Algorithm 3. Finally, for each new logical expression, physical expression, and group, this task will return a new OptimizeLogicalExpr, OptimizePhysicalExpr, and OptimizeGroup task, respectively. Optimize Physical Expression. The OptimizePhysicalExpr task takes in the set of all groups and the cost model and computes the minimum cost of this physical expression. If the physical expression has an unoptimized input group, then this task will immediately schedule a new task to optimize that group. Once the input group(s) have been optimized, the task applies the cost model to estimate the cost of executing this physical expression on top of the optimal (i.e. min. cost) execution of the input group(s). The physical expression’s cost is then updated and no new tasks are scheduled. We present the full Cascades algorithm in Algorithm 2. The algorithm takes a logical plan, a cost model, and a set of rules as input. It constructs the initial groups (i.e. the group tree) and then invokes the plan search procedure in Algorithm 3. Once the search finishes, the group tree is traversed to construct the final physical plan by using the physical operator in the optimal (i.e. min. cost) physical expression for each group. 
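The memoized, bottom-up costing performed by these tasks can be compressed into a toy sketch. The group and operator names echo the Figure 3 example; a real Cascades optimizer interleaves rule application with costing, whereas here the group tree and candidate implementations are fixed up front (an assumption for brevity):

```python
from functools import lru_cache

# A toy group tree: each group names its input groups and lists candidate
# physical implementations with their per-operator costs (illustrative).
GROUPS = {
    "S":   {"inputs": [],     "impls": {"Scan": 1.0}},
    "SM":  {"inputs": ["S"],  "impls": {"Map 3": 4.0, "Map 10": 2.5}},
    "SMF": {"inputs": ["SM"], "impls": {"Filter 1": 0.5, "Filter 6": 0.9}},
}

@lru_cache(maxsize=None)
def optimize_group(group):
    """Return (min_cost, chosen_impls) for executing this group.
    Memoization plays the role of Cascades' 'already optimized' lookup."""
    g = GROUPS[group]
    input_cost, input_impls = 0.0, ()
    for inp in g["inputs"]:
        cost, impls = optimize_group(inp)
        input_cost += cost
        input_impls += impls
    name, op_cost = min(g["impls"].items(), key=lambda kv: kv[1])
    return input_cost + op_cost, input_impls + (name,)

best_cost, best_plan = optimize_group("SMF")
```

Because each group's minimum cost is computed once and reused, the search visits each group once rather than re-costing every full plan; this is the Principle of Optimality that the next section must relax for constrained objectives.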
With this understanding in place, we will now discuss Abacus's Pareto-Cascades algorithm.

# 3.2 Pareto-Cascades Optimization

As discussed in Section 2.3, the Cascades algorithm is not designed to handle optimization problems with constraints on other dimensions. The key issue is the Principle of Optimality, which states that every subplan of an optimal physical plan is itself optimal. This principle enables the OptimizePhysicalExpr task to optimize each physical expression by composing it with the optimal expression for its input group. This is insufficient for problems such as minimizing cost with a lower bound on plan quality, because selecting the minimum cost expression for each group may result in constructing a plan which fails to meet the quality constraint. In order to address this issue, each dimension of the optimization problem (e.g. cost and quality) must be accounted for. Fortunately, there is a natural way to extend the Principle of Optimality into the constrained optimization setting, which we present as a theorem:

Theorem 3.1. (Under the operator independence assumptions of our cost model in Section 2.3) every subplan of a Pareto-optimal physical plan is itself Pareto-optimal.

Proof. We prove this by contradiction. Assume a Pareto-optimal physical plan $P$ has a subplan $S$ which is not Pareto-optimal. By the definition of $S$ not being Pareto-optimal, there exists a subplan $S'$ which dominates $S$. Replacing $S$ with $S'$ strictly improves the quality, cost, and/or latency of the subplan. Given the operator independence assumptions of our cost model in Equation (1), strictly improving the subplan will also strictly improve the quality, cost, and/or latency of the entire physical plan. This new physical plan $P'$ will be strictly better than our original plan $P$, but this contradicts our assumption that the original plan $P$ is Pareto-optimal. Thus, our theorem is true.
□ This theorem allows us to extend the Cascades algorithm to the constrained optimization setting by modifying each group to maintain its Pareto frontier of physical expressions during the search procedure in Algorithm 3. For example, if a user's objective is to maximize plan quality with an upper bound on plan cost, then each group needs to maintain its set of physical expressions which are Pareto-optimal with respect to quality and cost. The OptimizePhysicalExpr task must also be modified to compute the Pareto frontier of executing the current physical expression with any of the Pareto-optimal expressions from its input group(s). Finally, once the search procedure is finished, the Pareto-optimal plan must be recovered by recursively composing all Pareto-optimal subplans before selecting the final plan which is optimal for the given optimization objective. We present our new Pareto-Cascades algorithm in Algorithm 4. We use the same function for searching the plan space with the modifications described in the previous paragraph. The getParetoOptPlans() function is similar in spirit to getMinCostPlan() in Algorithm 2, except it builds and returns a list of Pareto-optimal plans. The selectOptimalPlan() function picks the plan on the Pareto frontier which is optimal for the optimization objective $O$ (e.g. selecting the max quality plan which is cheaper than a cost upper bound). Finally, we note that in the case of unconstrained optimization, this algorithm naturally reduces to the traditional Cascades algorithm.

# 3.3 Multi-Armed Bandit Operator Sampling

The second key optimization challenge in Abacus is choosing which physical operators to sample in order to obtain estimates of operator quality, cost, and latency. As discussed in Section 2.3, we assume that the number of physical operators $N$ is large enough that Abacus cannot realistically sample every physical operator.
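Both Pareto-Cascades (which prunes each group's physical expressions to its Pareto frontier) and the operator sampling discussed below rest on the same dominance test and frontier computation. A minimal sketch over illustrative (quality, cost) pairs, where higher quality and lower cost are preferred:

```python
def dominates(a, b):
    """a dominates b if it is no worse on every dimension and strictly
    better on at least one. Points are (quality, cost) pairs."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_frontier(points):
    """Keep only the non-dominated points, as each group must do for its
    physical expressions in Pareto-Cascades."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Illustrative subplan estimates: (0.70, 0.05) is dominated by
# (0.80, 0.02), and (0.95, 0.30) by (0.95, 0.10).
subplans = [(0.95, 0.10), (0.80, 0.02), (0.70, 0.05), (0.95, 0.30)]
frontier = pareto_frontier(subplans)

def select_optimal(frontier, cost_cap):
    """Analogue of selectOptimalPlan: the max-quality frontier point
    under an upper bound on cost, or None if no point is feasible."""
    feasible = [p for p in frontier if p[1] <= cost_cap]
    return max(feasible, key=lambda p: p[0]) if feasible else None
```

The quadratic frontier computation is fine at the scale of a single group's expressions; the important property is that dropping a dominated subplan can never discard a prefix of the eventual optimal plan, which is exactly what Theorem 3.1 guarantees.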
To overcome this issue, we draw inspiration from the infinite-armed bandit problem [1, 4], which can also serve as a model for settings with more arms than total samples. In our setting, the physical operators comprise the “arms” of our search space and we are given an initial sample budget $B$. At each step of the search, we must choose a physical operator to sample (decreasing our budget by one) and obtain a stochastic observation of that operator's performance. In contrast to the traditional multi-armed bandit (MAB) setting, where the objective is to identify the single best arm achieving the highest performance in expectation, Abacus's goal is to identify the potentially many physical operators which lie on the Pareto frontier of its optimization objective. We present Abacus's MAB operator sampling algorithm in Algorithm 5. The inputs to the algorithm are an initial set of physical operator frontiers $F$ (one for each logical operator, from line 4 of Algorithm 1), the cost model $M$, and the optimization objective $O$. The algorithm begins by computing the upper confidence bounds (UCBs), lower confidence bounds (LCBs), and means for each operator on each metric of interest for the objective $O$. The equations for computing the UCB and LCB of a given metric are shown below:

$$ucb_{m,i} = \mu_{m,i} + \alpha \cdot \sqrt{\frac{\log(N)}{n_i}} \qquad lcb_{m,i} = \mu_{m,i} - \alpha \cdot \sqrt{\frac{\log(N)}{n_i}}$$

The $\mu_{m,i}$ term is the sample mean of the observed performance for the given metric $m$ (e.g. operator latency) for the $i^{th}$ physical operator. $N$ is the total number of samples drawn and $n_i$ is the number of samples drawn for the $i^{th}$ physical operator. Finally, $\alpha \in [0, 1]$ is the exploration coefficient, which we dynamically scale to be 0.5 times the spread between the largest and smallest observed metric values across all physical operators.
Once the UCBs, LCBs, and means are computed for every operator and metric, we compute the set of Pareto-optimal operators based on their mean performance. Then, for each operator in the frontier, we check whether its upper confidence bound overlaps with the lower confidence bound of at least one operator on the Pareto frontier. Such an overlap implies that there is enough uncertainty in our estimates of operator performance that it is possible for the operator to lie on the Pareto frontier. If no overlap exists, then we remove the operator from the frontier and sample a replacement from our reservoir of not-yet-sampled physical operators. This completes the update of the operator frontier. The key difference between this algorithm and a traditional UCB algorithm for MABs is that we must consider overlap between each operator and the current Pareto frontier of sampled operators. Overlap on any dimension implies that the operator may still be Pareto-optimal, thus eliminating operators from consideration can be slightly more sample-intensive and time-consuming. In order to speed up the algorithm, we construct batches of samples rather than processing one sample at a time. Finally, one benefit of this problem formulation is that it allows for a number of extensions which can accelerate Abacus’s search for Pareto-optimal operators. For example, if there exist prior beliefs about operator performance, Abacus can use them to inform its initial operator frontier as well as the next operator(s) it draws from the reservoir during replacement. Furthermore, if Abacus has access to learned embeddings for each operator, then it can model the search as a contextual MAB [21, 39] which, under some theoretical assumptions, should perform better than its current context-free approach. We explore the benefit of prior beliefs on operator performance in Section 4.4, and leave a contextual bandit algorithm to future work.
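The overlap-based pruning step above can be sketched as follows. This is a simplified rendering in which every metric is oriented so that higher is better, and the names (`prune_and_replace`, `pareto_ids`) are hypothetical, not the actual Algorithm 5 code:

```python
def prune_and_replace(ops, pareto_ids, reservoir):
    """ops[i] maps each metric name to a (mean, ucb, lcb) tuple; higher is better
    for every metric in this sketch. pareto_ids are the operators that are
    Pareto-optimal by mean performance. A non-Pareto operator is kept only if,
    on some metric, its UCB reaches the LCB of a Pareto-optimal operator (i.e.
    it could still prove Pareto-optimal); otherwise it is replaced from the
    reservoir of not-yet-sampled operators."""
    kept, replacements = [], []
    for i, op in enumerate(ops):
        if i in pareto_ids:
            kept.append(i)
            continue
        plausible = any(
            op[m][1] >= ops[j][m][2]   # ucb_i >= lcb_j on metric m
            for j in pareto_ids for m in op
        )
        if plausible:
            kept.append(i)
        elif reservoir:
            replacements.append(reservoir.pop())
    return kept, replacements
```

Operators whose confidence intervals fall entirely below the frontier are thus swapped out, which is how the budget concentrates on plausibly Pareto-optimal arms.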
# 4 Evaluation
We evaluate Abacus on a diverse set of benchmarks to examine four experimental claims. First, we demonstrate that semantic operator systems optimized by Abacus outperform similar systems optimized by prior work. Second, we show that Abacus can leverage prior beliefs to identify better plans with fewer samples. Third, we demonstrate that Abacus’s Pareto-Cascades algorithm is crucial for satisfying constraints in constrained optimization. Finally, we show that Abacus improves system performance as constraints are relaxed.
# 4.1 Implementation
We implement Abacus as an optimizer inside of the open-source Palimpzest [23] framework, which supports all of the semantic operators in Table 1. We wrote standard implementation rules for each semantic operator to provide Abacus with the ability to implement any Palimpzest program. We also implemented the following implementation rules for optimizing map, filter, and retrieve operators:
• Model Selection: this rule implements map and filter operators with a single LLM call. The rule is parameterized by the set of models supported by Palimpzest.
• Mixture-of-Agents: this rule implements a map operator with a Mixture-of-Agents architecture [40] consisting of an ensemble of proposer models followed by an aggregator model. The rule is parameterized by (1) the size of the ensemble (1-3 proposers), (2) the model used for each proposer, (3) the model used for the aggregator, and (4) the temperature of the proposers (0.0, 0.4, or 0.8).
• Reduced-Context Generation: this rule implements a map operator in a two-step process. First, the input is chunked and embedded, then the top-k embeddings (based on similarity with the map instruction) are provided as input to the map. The rule is parameterized by the size of the chunks (1000, 2000, or 4000 characters) and $k$ (1, 2, or 4).
• Critique-and-Refine: this rule implements a map operator in a three-step process.
An initial model generates an output, which is then critiqued by a second model, before a third and final model generates a refined output. The rule is parameterized by the model used for each step.
• Retrieve: this rule implements a retrieve operator. The rule is parameterized by the value $k$ which determines the number of output objects returned by the retrieve operation.
These rules provide Abacus with $\sim 3{,}000$ physical operators when configured with access to all supported LLMs. For our experiments, unless stated otherwise, we provided Abacus with access to GPT-4o, GPT-4o-mini, Llama-3.1-8B, Llama-3.3-70B, Llama-3.2-90B-Vision, Mixtral-8x7B, and DeepSeek-R1-Distill-Qwen-1.5B. Finally, we implemented a transformation rule for re-ordering filter operations.
# 4.2 Benchmarks
We evaluate Abacus on three benchmarks for processing unstructured documents. These benchmarks go beyond simple question answering and decompose naturally into tasks involving inference, retrieval, and reranking. We describe each benchmark below.
BioDEX. The BioDEX benchmark [6] is an extreme multi-label classification problem from the biomedical domain. Each input is a document describing adverse reaction(s) experienced by a patient in response to taking a drug. In line with prior work [5, 29, 33], we focus on the task of producing a ranked list of the adverse reactions experienced by the patient. The dataset defines a set of 24,312 possible reaction labels, however the median document only has 3 ground truth labels which apply to it. Success on this task is measured by the rank-precision (RP) of the output rankings at a specified threshold $K$ (i.e. RP@K).
Specifically, given a set of $N$ ground truth reaction labels, RP@K measures the precision@K of the output ranking when $K \leq N$ and it measures the recall@K of the ranking when $K \geq N$.
CUAD. The CUAD benchmark [11] presents a task in semantic understanding of legal contracts. Each input in the benchmark is a legal contract. Given a set of 41 contract clauses, the task is to predict the span(s) in the contract which correspond to each clause (the ground truth for a single clause spans $\sim 0.25\%$ of the document on average). If the contract does not specify a given clause, then the output for that clause should be null. Success on this task is measured by the F1-score of the clause predictions. For each prediction, we determine its correctness by thresholding the Jaccard similarity between the words in the prediction and the ground truth. While the original benchmark uses a threshold of 0.5, recent work [33] used a threshold of 0.15 which we also use in our evaluation.
MMQA. The Multi-Modal Question Answering dataset [36] contains 29,918 questions which involve reasoning over images, text, and/or tables. For example, a question might ask “When was the famous painting with two touching fingers completed?” Answering this question may require looking at an image of the painting and extracting its completion date from a table or text snippet. While some questions in the dataset are limited to a single modality, many involve multi-hop reasoning over multiple modalities. We focus our evaluation on questions which require reasoning over images and text, images and tables, or all three data modalities. Success on this task is measured by F1-score on question answers, as the ground truth label for every question is a (possibly singleton) list of outputs. The dataset has a train, dev, and test split, however answers are not publicly available for the test split.
Thus, we use the dev split for our evaluation.
# 4.3 Systems Optimized by Abacus Outperform Prior Work
To evaluate our first experimental claim, we compare Abacus to DocETL [33] and LOTUS [29] as these frameworks also optimize systems of semantic operators. In order to maintain parity with their evaluations, we restrict each system to using the GPT-4o-mini, text-embedding-3-small, and clip-ViT-B-32 models for all operations (the latter is only used for MMQA image embeddings).
Implementations. For the BioDEX workload, we use code provided by the authors of DocETL and LOTUS to evaluate their systems. The LOTUS code computes a semantic join between each input medical document and the list of reaction labels before using a semantic map operation to rerank the labels. The DocETL code computes an equijoin between the input medical document and the list of reaction labels, before applying a reduce operation to rerank the labels. For Abacus, we implement a pipeline in Palimpzest which joins input medical documents to the most similar reaction labels using a semantic map and semantic retrieve operator, before reranking the labels using a semantic map operator. For the CUAD benchmark, each framework implements the pipeline as a single map (or extract) operation which computes all 41 of the output clauses. The code for DocETL was provided by the authors, while the code for LOTUS and Abacus was trivially implemented using a single operator. Finally, for the MMQA benchmark—since MMQA asks questions about Wikipedia content which likely exists in the training dataset of GPT-4o-mini—we implement a simple baseline which asks GPT-4o-mini to answer each question without any relevant image, text, or table content. This represents a lower bound on expected benchmark performance, and helps isolate the performance boost provided by LOTUS and Abacus. As of this writing, DocETL does not support image inputs so we omit it from our evaluation.
For LOTUS, we tried implementing semantic joins to match the input question to the relevant images, text snippets, and tables, respectively. However, due to the extremely low selectivity of the join predicates, we found that the optimized cascade pipelines could incur millions of LLM calls, which we could not afford. Instead, we used three semantic similarity joins to match the input question to relevant images, text snippets, and tables, respectively, before computing the final answer using a semantic map operation given the retrieved images, text, and table. For Abacus, we implement a similar pipeline in Palimpzest which executes three retrieve operators to join the input question to the relevant images, text snippets, and tables. We then apply a final map operation to compute the answer to the question given the retrieved images, text, and tables.
Results. We execute each system on the BioDEX, CUAD, and MMQA benchmarks with the objective of maximizing output quality. We sampled the test split of each benchmark 10 times, ran each framework on each sampled split, and measured the output quality, execution cost in dollars, and latency in seconds. For Abacus, we used 150, 50, and 150 samples for BioDEX, CUAD, and MMQA, respectively. We report the mean and standard deviation of these measurements. For DocETL and Abacus, which have distinct optimization and execution stages, we also break out the cost of optimization and the cost to execute the final optimized system. The results of our evaluation are shown in Table 2. Overall, Abacus is able to maximize quality better than all competing systems. On BioDEX, CUAD, and MMQA Abacus achieves 20.3%, 18.7%, and 39.2% better mean quality, respectively, than the next best system. Furthermore, on BioDEX, Abacus is able to achieve better quality while also providing cost and latency savings of 23.6x and 4.2x relative to the next best system.
The key drivers of Abacus’s performance improvements vary across benchmarks. LOTUS optimizes semantic joins by sampling and evaluating join tuples in order to learn thresholds for a cascade. Unfortunately, the quality of the cascade is subject to variance and largely depends on how well the sampled join tuples represent the overall join. On BioDEX, in the worst case, LOTUS produces joins which require >100,000 LLM invocations, leading to high runtime and cost. In contrast, Abacus spends its sample budget optimizing the physical operators for each semantic map and the value of $k$ for the retrieve. Abacus’s MAB sampling algorithm enables it to quickly eliminate bad physical operators, and we observe that (1) it is able to identify plans with better overall quality (on average) and (2) its plans exhibit less variance in performance than both LOTUS and DocETL. For most seeds, we find that Abacus implements both semantic maps with a Reduced-Context Generation operator, which suits the BioDEX task well as large portions of the input are irrelevant for the task at hand.
Table 2: Performance on the BioDEX, CUAD, and MMQA benchmarks for systems optimized to maximize quality. Quality is measured using RP@K for BioDEX and F1 score for CUAD and MMQA. Mean values are shown with their standard deviation.
For CUAD, DocETL’s LLM agent spends anywhere from 20-40 minutes decomposing the map operation into a multi-stage data pipeline. In our 10 trials, we observe that DocETL rewrites the map into a pipeline with anywhere from 2 to 7 operations. This variance in the depth of the pipelines generated by DocETL’s optimizer ultimately leads to large variance in the performance of its pipelines. Interestingly, we find that DocETL’s pipelines perform best (achieving up to 63.7% F1-score) when it composes a 3-step pipeline and perform much worse (as low as 35.3% F1-score) on its deeper 7-step pipelines.
By contrast, Abacus spends its entire sample budget searching for the optimal implementation of its single map operation. We observe that it consistently implements the semantic map with a Mixture-of-Agents operator that aggregates answers from multiple models. (LOTUS does not optimize map operators so its implementation is a single call to GPT-4o-mini, which is cheap and fast, but achieves low performance.) Finally, we implemented five LOTUS pipelines for MMQA, each using a different value of $k \in \{3, 5, 10, 15, 20\}$ for the semantic similarity join operations. We show the pipeline with the best performance (corresponding to $k = 3$) as well as the pipeline with the most similar cost to Abacus ($k = 15$). While these pipelines improve upon the naive GPT-4o-mini baseline due to their ability to retrieve context, they cannot optimize each operation’s $k$ value to tune how much context is passed to the final map operation. In contrast, Abacus is able to optimize these $k$ values on a per-operator basis (in addition to selecting a physical operator for its map operation) which boosts its performance further. We do observe that Abacus’s plans are significantly slower than those produced by LOTUS. This is a limitation of Abacus’s multi-armed bandit algorithm, which currently samples operators in a sequential fashion. In future work, we will seek to pipeline the sampling of operators in different operator frontiers.
# 4.4 Performance Improves with Better Priors
For our second experimental claim, we ran Abacus on the CUAD and BioDEX benchmarks with and without prior beliefs while also varying the sample budget. We omitted MMQA from our evaluation as the number of physical operators for multi-modal operations is small enough to sample exhaustively. We examined two optimization objectives: maximizing quality and maximizing quality subject to a cost constraint.
The cost constraints for CUAD and BioDEX were set equal to the 25th percentile of plan execution costs we observed in the unconstrained setting—thus making them non-trivial to satisfy. Per our discussion at the end of Section 3.3, we aimed to show that Abacus could leverage prior beliefs to identify more optimal plans with fewer samples. For each benchmark, we used two sets of prior belief(s). The first “naive” prior estimated each operator’s quality as an average of its model(s’) performance on the MMLU-Pro benchmark [41]. It also estimated the cost of each operator by averaging its cost per-input token and cost per-output token. This prior is free to compute, but lacks fidelity in the accuracy of its estimates. The second “sample-based” prior estimated operator performance by running each operator on 5 samples from the train split of the respective dataset. This prior is more expensive to compute, but has higher fidelity in its estimates. In practice, one might generate a strong prior by running each operator on a suite of benchmarks once offline, and then amortize the cost of computing the prior across many future workloads. The results of our evaluation are shown in Figure 4. Overall, we observe that Abacus produces plans with higher quality when provided with prior beliefs. In the unconstrained setting, plans optimized with prior beliefs perform up to 1.60x and 1.43x better (at a fixed sample budget) than those optimized without priors on CUAD and BioDEX, respectively. This gap is even greater in the constrained optimization setting, where plans optimized with prior beliefs perform up to 3.02x and 2.01x better (at a fixed sample budget) than those optimized without priors on CUAD and BioDEX, respectively.
This latter result comes from the fact that identifying a good Pareto frontier of operators is more difficult than identifying a single best operator, thus having a prior belief over the entire frontier provides greater benefit relative to sampling without priors. Figure 4: System output quality as a function of the sample budget when optimizing with (1) no priors, (2) naive priors computed from MMLU-Pro performance, and (3) priors computed with samples from each benchmark’s train split. We optimize CUAD and BioDEX with unconstrained and constrained objectives. For constrained optimization, we set the cost constraint to be the 25th percentile of plan costs observed in the unconstrained setting. Overall, Abacus yields better plans in the constrained and unconstrained settings when leveraging prior beliefs on operator performance. Figure 5: The fraction of plans which satisfy the optimization constraint when maximizing quality with an upper bound on cost on BioDEX. We ran Abacus with its Pareto-Cascades algorithm and a Greedy algorithm with three sample budgets on 10 unique slices of the BioDEX dataset per sample budget. Pareto-Cascades identifies more plans which satisfy the constraint than the Greedy baseline for each sample budget and prior beliefs scenario (with one exception where neither algorithm identifies such a plan). # 4.5 Pareto-Cascades Satisfies Constraints Better than Greedy Baseline For our third experimental claim, we sought to demonstrate that the Pareto-Cascades algorithm is crucial for performing optimization with constraints. In our discussion in Section 3.2, we claimed that optimizing multi-step pipelines across more than one dimension (e.g. maximizing quality with an upper bound on cost) requires an algorithm which considers the Pareto frontier of operators at each step. 
In theory, a greedy algorithm which only considers maximizing quality at each step could find itself unable to satisfy the constraint once it reached the later step(s) in the pipeline. To evaluate this hypothesis, we ran Abacus on the BioDEX benchmark with and without the Pareto-Cascades optimization algorithm. We chose BioDEX for our evaluation as it is the only benchmark involving more than one costly semantic map operation. We set the optimization objective to maximize quality with a cost constraint lower than the mean cost per record of Abacus’s plan in Table 2. Given that the plans in Table 2 could only use GPT-4o-mini—one of the cheapest models at Abacus’s disposal—this constraint is difficult to satisfy with most physical operators. As a result, we replaced GPT-4o with Llama-3.1-3B for this experiment because no plan with GPT-4o could satisfy the constraint. For our baseline, we replaced the Pareto-Cascades algorithm with a modified form of the traditional Cascades algorithm that selects the maximum quality (sub)plan for each group which does not exceed the cost constraint. If a (sub)plan cannot be constructed such that it satisfies the constraint, then the maximum quality plan is accepted regardless of the constraint. Effectively, this algorithm represents a greedy scheme in which the plan is myopically built using the highest quality subplan(s). The downside of this greedy approach is that it may select operators which are too expensive early on, and limit its ability to satisfy the cost constraint in the later stages of plan construction. We ran Abacus with each algorithm on 10 samples of the BioDEX test split for three different sample budgets. The results of our evaluation are shown in Figure 5. First, the Pareto-Cascades algorithm identifies more plans which satisfy the cost constraint in all but one setting (in which they tie).
Furthermore, when using sample-based priors, Abacus is able to satisfy the cost constraint 100% of the time for all three sample budgets. Finally, we observe that increasing the sample budget generally improves the optimizer’s ability to construct plans which satisfy the cost constraint. This is fairly intuitive, as greater sample budgets allow for more physical operators to be sampled, leading to a larger search space of possible plans. Overall, these results demonstrate that the Pareto-Cascades algorithm is necessary in order to satisfy constraints in optimization, and it does this more effectively than a greedy modification of the traditional Cascades algorithm.
Figure 6: The performance of plans optimized for maximum quality subject to a cost constraint on BioDEX and CUAD. We show the mean and 95% confidence intervals for 10 plans optimized at each constraint with the same fixed sample budget. For optimization without priors, the plan performance generally improves as the cost constraint is relaxed. For optimization with priors, plan performance is more stable as the prior beliefs on operator performance help Abacus identify good plans even with tight cost constraints.
# 4.6 Abacus Leverages Relaxed Constraints
For our final experimental claim, we used Abacus to optimize plans for BioDEX and CUAD with the objective of maximizing quality subject to a cost constraint. We varied the cost constraint from unconstrained optimization down to $1, which is 11.8% and 16.2% of the median cost of an unconstrained plan on BioDEX and CUAD, respectively. Tightening constraint(s) during optimization constrains the space of systems available to Abacus as fewer systems are able to satisfy the constraint(s). Thus, our goal was to demonstrate that Abacus responds to looser constraints by identifying more optimal plans, or vice-versa, that Abacus responds to tighter constraints with non-trivial system implementations.
For each cost constraint, we used Abacus to optimize plans for maximum quality with 10 different test splits of the BioDEX and CUAD datasets. We used the same sample budget at each cost constraint and optimized with and without prior beliefs. The results of our evaluation are shown in Figure 6. For optimization without prior beliefs, Abacus is generally able to identify plans which achieve better quality as the cost constraint is relaxed. Furthermore, Abacus still identifies plans which achieve non-trivial performance when optimizing under tight constraints. When optimizing with prior beliefs, we see a much smaller degradation in performance as the cost constraint is tightened. For example, without prior beliefs, performance on BioDEX decreases by 45.6% from having no constraint to a constraint of $1. However, with prior beliefs it only decreases by 12.5% at its lowest point with a constraint of $4. This is due to the fact that prior beliefs on operator performance help guide Abacus’s MAB sampling algorithm to prioritize operators which lie on the entire Pareto frontier of the cost vs. quality trade-off.
# 5 Related Work
Now that we have discussed Abacus in detail, it is useful to illustrate how it differs from related work. We begin with a comparison to other frameworks and their optimizers, before providing some background on semantic operators and relational query optimization.
Optimizing Semantic Operator Systems. Recent work has investigated the optimization of semantic operator systems [2, 14, 22–24, 29, 33, 38]. Palimpzest [23] is the core framework into which we have integrated Abacus. However, Palimpzest has been updated since its most recent paper; it now supports the operators described in Table 1 and uses the optimization framework described in this paper. LOTUS [29] supports a similar set of operators to those in Table 1, but it also supports semantic joins.
LOTUS primarily optimizes semantic join, filter, group-by, and top-k operators. For semantic joins and filters, it processes a sample of operator inputs with a “gold algorithm” and a cheap proxy. It then tunes thresholds on the proxy scores to implement a model cascade which offloads most data processing to the cheap proxy, while providing statistical guarantees on the quality of the cascade relative to the gold algorithm. In contrast to Abacus, LOTUS will not explicitly optimize its operators to satisfy constraints on system cost and latency. Instead, it relies on the developer to manually tune cascade thresholds. DocETL [33] provides developers with the ability to author data processing pipelines in a YAML domain-specific language. Like LOTUS and Palimpzest, it supports a similar set of operators to those in Table 1, but it also introduces two new operators for entity resolution and context management of operator inputs. DocETL optimizes its pipelines through query rewrites. An optimizer LLM agent applies query rewrite directives to modify the current state of the pipeline, and a validator LLM agent evaluates the effect of these rewrites on sampled inputs. In contrast to Abacus, DocETL currently only optimizes pipeline quality without considering constraints on pipeline cost or latency. Similar to DocETL, Aryn [2] applies rewrites to query plans with an LLM agent. However, it also uses an LLM to generate query plans from users’ natural language questions, and it focuses more on human-in-the-loop evaluation and debugging to validate the plans it generates. VectraFlow [24] built a stream processing engine with support for vector data and vector-based operations. Meanwhile, Caesura [38], EVA [14], and ZenDB [22] integrated semantic operators into systems which support SQL queries over multi-modal, video, and text data, respectively. Caesura uses traditional SQL operators found in SQLite (such as joins, aggregations, etc.)
and augments them with TextQA and VisualQA operators for question answering over text and image inputs. ZenDB supports ad-hoc SQL queries over document collections and uses LLMs for selection and projection operations. EVA augmented a SQL-style query language with semantic UDF operator(s) for video processing, but it ultimately included semantic operators for processing images and text as well.
Optimizing More General AI Systems. There is also a large body of work on building AI systems that go beyond using semantic operators. We focus our discussion on frameworks which treat the optimization of these systems as a primary challenge. DSPy [18, 28, 35] enabled users to construct and optimize “language model programs”, i.e. workflows composed of modular operators which can be optimized in a declarative manner. The main levers of optimization included prompt optimization, parameter optimization, and model finetuning. These techniques were explored further in two subsequent papers [28, 35] which aimed to address key challenges in optimizing pipelines of modular operators with only end-to-end labels. While Abacus faces a similar challenge in optimizing systems with many AI operators, it differs in the specific set of optimizations it considers and in its optimization algorithm(s). More recently, AFlow [12, 44], Archon [31], and ADAS [13] all explored automatically constructing AI systems for a given workload. AFlow seeks to fully automate AI system development by using LLMs as optimizers within a Monte Carlo Tree Search (MCTS) algorithm that explores a search space of computation graphs and operator implementations. Archon [31] uses Bayesian optimization to perform a search over a predefined set of “inference-time architectures”, which are effectively pipelines composed of various modular operators. ADAS proposes a “Meta Agent Search” algorithm, in which an LLM is given a (possibly empty) archive of operators as well as some building blocks (e.g.
tools and foundation models) and is asked to generate novel operators to solve a given benchmark. In contrast to these works, Abacus focuses solely on declarative optimization of semantic operator systems.
Semantic Operators. Early work on semantic operators focused on adding machine learning classifiers to data management systems (in the form of UDFs) for tasks requiring semantic understanding such as image classification, object detection, sentiment analysis, and more [3, 15–17, 19, 30]. One limitation of these systems was that they struggled to answer queries which did not align well with the task their UDF was designed for. For example, a UDF for detecting cars in video frames would struggle to answer a query seeking to count the number of pedestrians in the same video. The advent of foundation models presented a potential solution to this problem, because the models could (in theory) answer arbitrary queries for a given input data modality. Working in this direction, Caesura [38], ZenDB [22], and EVA [14] used LLMs and vision models as semantic operators which assisted in answering queries over multi-modal, text, and video domains, respectively. These systems could support a broader range of SQL queries than those from the previous generation, however these queries still only reflected a subset of the tasks AI developers wish to perform. In an effort to support newer AI workloads, more recent systems sought to leverage semantic operators as the building blocks for semantic operator systems [2, 23, 24, 29, 33]. These systems do not strictly adhere to the relational model and support a mix of traditional relational operators (e.g. filter, map, aggregate, etc.) and newer operators (top-k, retrieve, and others). In contrast to the aforementioned work, Abacus does not contribute new semantic operators. Instead, it aims to optimize systems of semantic operators in a more extensible manner.
Crucially, Abacus’s framework for optimization is not tied to a specific set of semantic operators, and can be extended to the operator sets described in other papers through the addition of implementation and transformation rules.
Relational Query Optimization. There is a long and rich literature on query optimization in database systems, most of which is focused on optimizing the execution of relational queries [7–9, 26, 27, 32]. From this line of work, Abacus most closely resembles a Cascades optimizer [7]. There are two key challenges which make optimizing semantic operator systems different from optimizing relational queries. First, the quality of a semantic operator is not guaranteed to be perfect. Thus, Abacus must be able to estimate the quality of an operator, possibly without the use of precomputed statistics. Second, in order to support constrained optimization (e.g. “maximize output quality without spending more than $1”) Abacus cannot rely on the principle of optimality to prune sub-plans during its plan search. This necessitates Abacus’s use of a new dynamic programming algorithm which maintains the Pareto frontier of physical plans for every subplan in its search.
LLMs enable an exciting new class of data processing applications over large collections of unstructured documents. Several new programming frameworks have enabled developers to build these applications by composing them out of semantic operators: a declarative set of AI-powered data transformations with natural language specifications. These include LLM-powered maps, filters, joins, etc. used for document processing tasks such as information extraction, summarization, and more. While systems of semantic operators have achieved strong performance on benchmarks, they can be difficult to optimize. An optimizer for this setting must determine how to physically implement each semantic operator in a way that optimizes the system globally. Existing optimizers are limited in the number of optimizations they can apply, and most (if not all) cannot optimize system quality, cost, or latency subject to constraint(s) on the other dimensions. In this paper we present Abacus, an extensible, cost-based optimizer which searches for the best implementation of a semantic operator system given a (possibly constrained) optimization objective. Abacus estimates operator performance by leveraging a minimal set of validation examples and, if available, prior beliefs about operator performance. We evaluate Abacus on document processing workloads in the biomedical and legal domains (BioDEX; CUAD) and multi-modal question answering (MMQA). We demonstrate that systems optimized by Abacus achieve 18.7%-39.2% better quality and up to 23.6x lower cost and 4.2x lower latency than the next best system.
# 1 Introduction Generative AI (GenAI) tools, as exemplified by a range of generative pretrained transformers (GPTs) that began entering the market in November 2022 [70], have been broadly heralded as transformational tools for productivity and efficiency [121, 75], creativity [108], and the alleviation of tedium [90]. To pursue such promises, individual organizations have begun to implement customized and purpose-specific applications of GenAI as a way to leverage prior computational work (e.g., the production of documents and software code) for the generation of new work products by fine-tuning and augmenting existing so-called “foundation models”. Such pursuits expand the scope of how work is accomplished cooperatively through computation by placing workers who are spatially and temporally disconnected from one another into relationships that do not easily fit into existing frameworks for understanding human-computer interactions [3]. As will be explored in greater depth below, the spatial and temporal dislocations represented by foundation models have played a significant role in the host of concerns being raised about the propensity of GenAI to “hallucinate” [82], “bullshit” [112, 94], and “confabulate” [130]. Put less dramatically, GenAI is thought to possibly introduce inaccuracies into existing workflows, and therefore to raise the question of whether or not such tools can be relied upon for high-stakes cooperative work domains where accuracy and precision are paramount. Additionally, early research indicates that indiscriminate use of GenAI tools in software development teams can increase the number of software flaws as it increases programmers’ coding speed [92], or can lead to “deskilling” of programmers and other knowledge workers through over-reliance on such tools [128]. 
This tension between the promise of productivity and concerns about inaccuracies is amplified by increasing pressure placed on workers to use such tools. That pressure raises a significant set of questions about whether and to what extent those who work in domains that demand accuracy and precision (e.g., medicine, engineering, scientific research) trust GenAI outputs to be accurate, how they manage inaccuracies, how they might still use GenAI outputs productively even if those outputs are inaccurate or imprecise, and how they might effectively organize efforts to manage the effectiveness of such tools within the context of team- and organization-wide goals [25, 44, 50, 10]. This paper addresses these broad questions by reporting on a series of intensive qualitative interviews (n = 17) with engineers who use GenAI for hardware and software engineering applications related to integrated circuit (IC) design. These interviews were designed to answer our empirical research questions about how work practices were (and were not) changing to ensure desired levels of precision in work products, the extent to which engineers encounter difficulty with the (in)accuracy of GenAI outputs overall, what other difficulties, apart from inaccuracy, engineers encounter when using GenAI tools on the job, and how they recover from troublesome encounters. The paper ultimately concludes that while engineers frequently encounter various forms of “trouble” when using GenAI, their concerns about accuracy were secondary to other, more troublesome aspects of their engagement with this rapidly developing technology. Crucially, all these troublesome aspects are manifestations of the gap [20] between the general-purpose nature of these tools, intended to be used across scales and domains, and the particular contexts in which they are brought to bear on concrete engineering problems according to the unique conventions of specific organizations, specialties, and teams. 
Across every domain of AI development, accuracy metrics are (for better or worse [e.g., 71]) crucial to the task of demonstrating that an AI system has “learned” [89]. Indeed, reducing error for some known function is the key task of machine learning [49], and identifying that function is the key social practice that legitimizes the use of AI across domains [61]. However, the imperatives that drive GenAI development may vary from those that drive GenAI use. Apart from concerns about inaccuracy, engineers reported encountering a wide range of trouble when using GenAI tools. Here, and throughout this paper, “trouble” is taken to mean any difficulty or inconvenience users experience when interacting with a GenAI tool such that they A) must return to the interface and prompt the GenAI to refine its output, B) must edit or alter the output to make use of it, C) can articulate a way in which the output is suboptimal and could potentially be made better-suited to their needs, or D) resort to other, non-GenAI tools to achieve their goals. In this paper we work to identify forms of “trouble” where the existing literature might otherwise suggest we look for “socio-technical gaps”, which Ackerman refers to as a “fundamental mismatch between what is required socially and what we can do technically” [20]. Whereas Ackerman treats socio-technical gaps as a “necessary problematic” in cooperative computational work that can be approached from a number of directions (e.g., technical innovation, user education, infrastructural adjustment), we focus here on how these gaps emerge for users and on users’ strategies for addressing them. This stands in contrast to the type of sustained interrogation that Ackerman calls for in addressing gaps socially and technically. 
The eruptions of socio-technical gaps into users’ awareness are the objects of our analysis, and so we opt for “staying with the trouble” [46] throughout, mapping how users recover from and repair trouble, both within their interactions with the GenAI tools they use and across the existing workflows they use GenAI within. Here trouble is both a signal produced by human experiences and a property of the uneasy fit between the social and the technical marshaled within the systems they use. As a consequence of this focus, we draw developers’ attention away from accuracy as the dominant source of trouble, and towards the mixture of pipelines, grounding techniques, and interfaces that places the majority of the burden of overcoming sociotechnical gaps on those who use these systems. Below, these troublesome aspects are enumerated in detail, but they circulate around the challenges users face in ‘recovering’ from and ‘repairing’ interactions, challenges that stem from a fundamental mismatch between the general-purpose scope these tools are developed for and the far more constrained scope users’ tasks require. This paper makes four main contributions to the field(s) of computer-supported cooperative work: (1) Evidence that the concerns engineers might otherwise be expected to have about the accuracy of GenAI are offset by their already existing work practices that prioritize accuracy and precision, into which GenAI is gradually being incorporated. (2) The introduction of trouble as an undesirable feature of GenAI interactions that is orthogonal to concerns about accuracy and inaccuracy and instead captures the challenges that engineers (and other users of GenAI) encounter when incorporating GenAI into their work practices. This constitutes a novel contribution to the literature focusing on GenAI in workplaces. 
(3) A mapping of trouble as it arises through various aspects of GenAI systems, understood as sociotechnical systems with multiple locations for intervention to manage such trouble. (4) A set of recommendations for specific technical and social interventions that can reduce trouble for engineers employing GenAI on the job. Following a brief review of relevant literature and an explanation of the research methodology employed, this paper will discuss each of the above contributions in turn. The paper will conclude with a discussion of how the findings and recommendations presented here inform future research and provide valuable insights to build on, while noting that extending its findings to other high-precision domains would require further comparative analysis of domain-specific tasks. # 2 Literature Review Because the scope of this paper includes an in-depth analysis of both the technical details of GenAI systems and the implications of those details for a sociotechnical analysis of how GenAI figures in cooperative work, and because the intended audience of this paper includes those who might not be well-versed across technical and nontechnical domains, a review of both the technical and sociotechnical literature is provided here. # 2.1 Technical Approaches to Generative AI Generative AI, as the name suggests, is a particular set of artificial intelligence capabilities oriented toward generating seemingly novel outputs across several modalities including text, image, video, and audio. These capabilities differ from other AI capabilities, which are more oriented towards making inferences based on a more constrained set of data used to train models. Such inferences might be used to classify or rank data points that are not part of the original training data by, for example, classifying an image as containing a particular object or ranking the tone or ‘sentiment’ of a text passage as having some degree of ‘toxicity’ [80, 64]. 
For GenAI applications, these capabilities are combined and extended to produce outputs that are not present in the training data and that satisfy a prompt provided by users. Typically, this is achieved by predicting which words, pixels, or frequencies are most likely to follow (or be situated near) one another based on a mapping of the user’s prompt to the vast corpus of training data [48]. This is most clearly illustrated by applications that use generative pretrained transformers (GPTs), which calculate the statistical probability of words and word-sequences that are most likely to follow each other given a prompt [118]1. Building on such capabilities, GPTs are now being augmented by techniques like retrieval-augmented generation (RAG) [60] to leverage existing documents that organizations produce, facilitating the generation of organization-specific text, images, computer code, and other outputs. Increasingly, GenAI is becoming a key computational means through which teams working within professional organizations conduct their work [91]. While the capabilities demonstrated by GenAI tools have been impressive, significant concerns accompany their outputs. Text outputs can be correct or appropriate in their form while being incorrect or inaccurate in their meaning [53]. GenAI is trained on which words occur near each other in text scraped from the open web [97] and uses those statistical relationships to produce grammatically (and superficially) valid sentences and paragraphs, but it is not trained on the meaning or correctness of those materials. By way of illustration, the sentences “William I of the Netherlands became king of Holland in 1806”, “William I of the Netherlands became king of Holland in 1807”, and “King Louis Bonaparte became king of Holland in 1806” might each be presented as outputs of a GenAI; all are grammatically valid and plausible to non-experts in 19th century European history. 
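The word-prediction principle described above can be illustrated with a toy bigram model; this is a deliberate simplification of transformer-based GPTs, using an invented miniature corpus, and is meant only to show how next-word probabilities arise from co-occurrence counts:

```python
# Toy illustration of next-word prediction from co-occurrence statistics.
# Real GPTs use transformer networks over enormous corpora; this bigram
# model is a deliberate simplification of the same underlying idea.
from collections import Counter, defaultdict

corpus = ("william became king of holland . "
          "louis became king of holland . "
          "william became king of the netherlands .").split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(word):
    """Relative frequency of each word observed after `word`."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# "of" is followed by "holland" twice and "the" once in this corpus,
# so "holland" gets the higher probability. Note that neither estimate
# says anything about which completed sentence is factually correct.
dist = next_word_distribution("of")
```

The point of the sketch is exactly the one made in the prose: the model ranks continuations by observed frequency, not by historical accuracy.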
But these sentences are not all factually correct or accurate. This propensity of GenAI to produce plausible but inaccurate outputs has been referred to as “hallucination.” Hallucinations can be exacerbated by the quality of data in the training corpora or by the instructions (or “metaprompts”) provided alongside users’ prompts instructing the systems to be helpful (and thereby to return an answer whether or not it can be algorithmically associated with a document in the training data) [82]. None of this is to say that GenAI is incapable of returning factually correct answers. Numerous tools, tests, and benchmarks have been developed to improve and assess the degree of accuracy of which GenAI systems are capable [85, 127, 120, 84]. However, benchmarking how well a model produces a desired result does not adequately capture the many usecases in which the ‘correct’ answer depends on the context of use. Some of these limitations of benchmarks have been pointed out [116], but in reviewing the literature for this project the preponderance of concerns about model accuracy appeared to overwhelm the much more salient organizational concerns authors had about how outputs could be tailored to specific usecases. These concerns have partially been addressed by RAG-based approaches, which provide relevant documentation alongside users’ prompts from which outputs can be more directly drawn and (importantly) provide links and citations to documents that may contain accurate facts [83, 68, 60, 63]. These promising advances may greatly reduce, but do not eliminate, inaccuracies in GenAI outputs. Furthermore, significant questions remain about how organizations can best incorporate the beneficial aspects of GenAI while minimizing disruption caused by errors and inaccuracies, and while ensuring that outputs are not just accurate but also appropriate for specific usecases, even when those usecases have not been anticipated by existing benchmarks. 
# 2.2 Sociotechnical Approaches to Generative AI Questions about how business practices might be inadvertently altered, or might need to be deliberately altered to accommodate GenAI, have recently received attention from researchers using a sociotechnical analytic lens. Attention to the relationship between emerging GenAI technologies and changes to organizations stems from what Wanda Orlikowski et al. refer to as the “metastructuring” effects of technology [14] and raises questions about how GenAI might be accommodated, or stymied, by organizational factors. Such insights from organizational sociology underline the importance of approaching GenAI systems as sociotechnical systems, i.e., as systems that have both social components and technical components, neither of which fully determines the functioning and impacts of the whole [16, 55, 62]. Addressing how workers collaborate, with each other and with their tools, is crucial to understanding the impact of new computational tools in the workplace. Recent studies of GenAI in the workplace taking a sociotechnical or organizational approach have tended to use a structural lens. For example, these have included studies on how the introduction of GenAI can change roles on teams and the shape of team-based workflows [122, 111, 124], and organizational practices such as project management [99, 76]. Others have built on this approach to explore design implications for GenAI in software development [105]. Addressing how workers collaborate and the impact of new computational tools is particularly important for domains where accuracy and precision have historically been the primary values around which work is organized [7, 39, 51]. Software engineering has long been a focus of scholars examining relationships between individual engineers, the wider organizational structures in which they are embedded, and the tools they use [56, 101, 65, 54, 50]. 
More recently, software engineering has been a site of intense focus for researchers examining how GenAI can be incorporated into work practices, particularly through the development of AI-powered software engineering ‘assistants’ like Microsoft Co-Pilot (built with OpenAI’s GPT technology in the backend), Anthropic’s Claude Sonnet, and Mistral’s Codestral. Such tools purport to handle a number of complex tasks. These include composing entire applications in any number of programming languages using only plain-language prompts describing a desired functionality, explaining what code does within an already-existing program, and aiding in many other software engineering tasks like writing subroutines and functions, debugging, code-commenting, and code optimization [78]. Science and technology studies (STS) discourse on accuracy, maintenance, and repair also has much to offer sociotechnical studies of GenAI. “Accuracy” has been treated variously within STS literatures as socially constructed through negotiated agreements about what constitutes accuracy [37], as the product of situated practices that put humans and non-human actors into relation in ways that give meaning to concepts like accuracy and precision [4, 19, 31], and as the result of situated decisions that enact epistemological, ontological, and ethical imperatives [34]. “Maintenance and repair” refer in part to behaviors that people engage in to make tools, tasks, and obligations “fit” together in what would be recognized as a workflow, originally referred to by Anselm Strauss as “articulation” labor [5]. The concept of “repair” specifically is attributed to Steven Jackson, in his chapter “Rethinking Repair” [40]. He argues that it is the spontaneous, imaginative labor of human beings “fixing” broken technological systems, to the extent that they can be made “useful”, which is truly responsible for any resilience and value these systems manage to deliver to people. 
These efforts also echo what Schmidt and Bannon (1992) argue is a central concern of the field of human-computer interaction: understanding the “articulation work” that happens when workers engage in cooperative efforts to re-situate multifaceted complex tasks with different technologies within specific contexts, which they refer to as local “work environments” [11]. The onset of GPT-based tools, combined with managerial encouragement to use these tools, creates an apt environment in which to examine both how locally specific notions of “accuracy” diverge from more global concerns about GenAI tools’ performance with respect to specific benchmarks, and contextually situated repair practices: how do people “fix” the errors, hallucinations, and confabulations of GenAI output so that such outputs can be truly useful? Integrated circuit (IC) design and manufacturing, the domain of the hardware and software engineers interviewed for this paper, is one such area in which GenAI has been suggested to have tangible impact [86]. In IC design, hardware and software engineers working across multiple teams must precisely describe, analyze, and verify designs using code-like programming languages such as SystemVerilog [33] as well as custom software applications and scripting programs [1]. These tasks have been identified as compelling sites for GenAI application [93, 74, 103, 88], and engineers have been increasingly encouraged to make use of such tools for greater productivity and output quality [87, 79]. This encouragement has been even more forcefully directed at software engineers, and recent work has only begun to assess the impacts of GenAI on hardware and software engineering [107, 119, 93, 117]. In the remainder of the paper, we discuss our effort to evaluate how early adopters of GenAI for IC design interact with these tools. 
# 3 Methodology We conducted a series of qualitative interviews among employees of a large, multi-national IC design and manufacturing firm. One series of interviews included hardware and software engineers who were documented as intensive users of GenAI tools, defined as using internally-developed GenAI tools to output more than 500,000 tokens over the 30 days prior to selection for inclusion. Other inclusion criteria were that interviewees were based in the Western Hemisphere, and both hardware and software engineers were sampled across a wide range of job roles (e.g., design verification, silicon architect, physical design, electronic design automation (EDA) tool development, enterprise application development, firmware architecture). Interviews explicitly addressed employees’ experiences using internally-developed GenAI tools. These tools functioned similarly to publicly available GenAI chatbot tools, using GPT technology on the backend, but ran on secure servers ensuring proprietary information could be included in users’ prompts without risking data leakage outside the company. These tools also included additional company-specific safety features and usage limitations, and some of the tools incorporated retrieval augmented generation (RAG) [60], giving users access to company-specific documentation as part of their interactions.2 A qualitative interview methodology was employed [38] that emphasized a semi-structured interview protocol [36]. Qualitative interviews are ideally suited to gathering data about the set of social practices that shape the effects of sociotechnical systems [41, 45, 58, 52], and are therefore well suited to surfacing the concerns engineers across job roles might have about accuracy when using GenAI tools. Identifying concerns about accuracy, for example, requires understanding how accuracy and precision are produced by engineers within their social setting, using the tools at their disposal, and subject to “rituals of verification” [17]. 
Understanding the forms of “trouble” engineers encounter when using GenAI requires understanding what their goals and reward systems are, how those goals are set and met, and who they work with to achieve those goals. And mapping such troubles to concrete locations within sociotechnical systems, and making recommendations to repair such trouble, requires understanding the individual, embodied experience of interacting with such systems and the particular moments in which trouble erupts, and “staying with” it [43], in this case in the form of open-ended interview protocols that prioritize extemporaneous follow-ups. The interview protocol is included as Appendix A. A total of 17 interviews were conducted with hardware (n = 10) and software (n = 7) engineers, all of whom had earned advanced degrees in computer science or electrical engineering and had been in their current job roles for at least one year. Given the relatively recent emergence of generative AI tools, and especially the fact that the internally-developed GenAI tools had only been available for approximately six months prior to the study, all participants had comparable degrees of experience using GenAI on the job. The series of interviews was reviewed by an internal risk and compliance team, which recommended and approved privacy measures to protect interviewees. Interviews were recorded via videoconference software, which also produced automated transcripts of the interviews. Recordings and transcripts of each interview were downloaded immediately after each interview and moved to a secure server for storage. Transcripts were post-processed using a custom Python script to convert each transcript from a phrase-based format to a turn-based format.3 Transcripts were first coded openly, using MaxQDA4, and then grouped thematically [30] according to themes that emerged within and across the categories of analysis. 
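A minimal sketch of the kind of phrase-to-turn conversion such a script performs is given below; the study’s actual script and transcript field format were not published, so the (speaker, text) tuples and example phrases here are assumptions for illustration only:

```python
# Hypothetical sketch: merge consecutive phrases by the same speaker into
# single turns. The (speaker, text) tuple format and the example phrases
# are assumptions; the study's actual script was not published.
def phrases_to_turns(phrases):
    turns = []
    for speaker, text in phrases:
        if turns and turns[-1][0] == speaker:
            # Same speaker as the previous phrase: extend the current turn.
            turns[-1] = (speaker, turns[-1][1] + " " + text)
        else:
            # New speaker: start a new turn.
            turns.append((speaker, text))
    return turns

phrases = [("Interviewer", "How important is"),
           ("Interviewer", "accuracy to you?"),
           ("Engineer", "I think it's critical."),
           ("Engineer", "I know AI does hallucinate.")]

turns = phrases_to_turns(phrases)
# One turn per contiguous speaker run, with that speaker's phrases joined.
```

Turn-based transcripts of this shape make conversational coding in tools like MaxQDA considerably easier than phrase-level captions.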
The thematic groups “trouble” and “repair” (Table 2) are the subject of the greater balance of the analysis presented here, but additional thematic groups included common use cases as well as “judgments” participants passed on the tools and their performance (Table 1), which the interview protocol was intended to illuminate [22]. # 4 Encountering GenAI On-the-Job Engineers were interviewed about their use of several GenAI tools developed internally for use by employees. In these interviews, accuracy played a smaller role than expected in the matters of concern raised by engineers; however, a category of concerns needing repair and recovery [11] emerged from these interviews, which we label as trouble. Trouble here is defined as anything needing repair and recovery. Inaccuracy can be seen as a form of trouble, but it is treated as a separate or special category of analysis because of the role questions about accuracy played in the study design (discussed above). The tools used by engineers took the form of conversational agents that used a chatbot-style interface [69]. Each of these tools was an adaptation of a commercial-grade foundation model that ensured proprietary information entered into prompts would not leave corporate-leased protected enclaves of vendors’ servers or leak to foundation model vendors. Each of these tools was also supplemented with metaprompts [123] to facilitate the use of the tools by various user roles across the company. These tools were also all being used in a testing mode, in which feedback mechanisms (thumbs up/down, star ratings, and free response commenting) within the user interface (UI) enabled constant refinement by the corporate IT team throughout the study period. Some of these tools used RAG to augment data in foundation models with company- or project-specific information. Engineers used these GenAI tools for a number of tasks, not all of which were directly related to hardware and software engineering. 
Additionally, some tools allowed users to supplement their prompts with files (e.g., specification documents, test cases, product manuals) that could then be interrogated or used as the basis for generated responses. The most common engineering use cases (see Table 1) served as alternatives to information search (e.g., about a software command or programming language syntax, general information about a specific domain, or to demystify a specific error message) and for the generation of code snippets (e.g., “Write a function that iterates over elements in a tuple”). Other prominent use cases included the generation of scripts for use in a terminal window, summarization or explanation of code, code optimization, and generating documentation to accompany engineering work products. Non-engineering-specific tasks included the generation of meeting summaries from transcripts (.vtt files), drafting emails, changing the tone of emails, assisting in conducting HR tasks like self-evaluations, and translating between languages for communicating with teammates abroad. The trouble that engineers encountered through using GenAI for these purposes is discussed below. # 4.1 Accuracy and Inaccuracy This research proceeded from a hypothesis that the accuracy of GenAI tools would be a primary concern for engineers using such tools to produce software and design integrated circuits. In interviews, engineers generally demonstrated unambiguous understandings of what “accuracy” means to them within their professional practice. They described accuracy as a property of GenAI tools and their outputs, without questioning, critiquing, or otherwise problematizing the concept of accuracy, which simplified discussions about the importance accuracy held in their use of GenAI tools for integrated circuit development. However, in these same interviews, engineers’ concerns about other issues eclipsed those about accuracy, without exception. 
While several (n = 7) engineers expressed opinions about the accuracy or inaccuracy of GenAI in the engineering domain, none saw it as a barrier to using the tools or as a reason not to use the tools for engineering tasks. In most instances, concerns about accuracy were orthogonal to the goals they pursue when they use GenAI on the job. One hardware engineer with four years of experience within the company stated, when asked how important the accuracy of GenAI tools is, “I think it’s critical. I know AI does hallucinate, but you know that’s where I’m having to use my own brain with it, which is, you know, I’m using it as a tool for improvement.” This ambivalence toward accuracy stands in stark contrast to the role accuracy plays in developing GenAI tools, where it is a central focus of benchmarking. Table 1. Usecases and Judgements In contrast, hardware and software engineering disciplines have developed a range of tools and practices for ensuring the reliability of software and hardware, which renders concerns about the ‘accuracy’ of any one piece of code or component of an integrated circuit somewhat moot. For software engineers, reliability engineering is a robust practice with a vast array of techniques to produce dependability and ensure that “latent” and “dormant” errors can be identified throughout the software development lifecycle [15, 32] through robust code review, unit testing, and many other practices. Similarly, in hardware engineering and IC design, system validation and verification [35, 13] represent more than 50% of the overall development effort [27], and these tools, deployed continuously over the development lifecycle, regularly catch errors and inaccuracies in designs. 
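To illustrate how such routine practices catch superficial inaccuracies, consider a minimal unit test; the helper function and its off-by-one bug are invented for this example, not drawn from the interviews:

```python
# Illustration of how routine unit testing catches the kind of superficial
# error a generated snippet might contain. The function and its off-by-one
# bug are invented for this example, not drawn from the study.
def count_set_bits(value):
    """Generated-looking helper with a subtle bug: it drops the final bit."""
    bits = 0
    while value > 1:          # bug: the condition should be `value > 0`
        bits += value & 1
        value >>= 1
    return bits

def count_set_bits_fixed(value):
    """Corrected version produced during review."""
    bits = 0
    while value > 0:
        bits += value & 1
        value >>= 1
    return bits

# A routine test exposes the discrepancy before the code ships:
# 0b1011 has three set bits, but the buggy version reports two.
assert count_set_bits_fixed(0b1011) == 3
assert count_set_bits(0b1011) == 2
```

It is precisely this kind of cheap, automated check, applied to GenAI output just as to human output, that makes the accuracy of any single generated snippet a secondary concern.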
Therefore, it is not GenAI that hardware and software engineers need accuracy from; it is the overall sociotechnical system—the checks and rechecks, the documentation practices, and the testing systems—around the engineers that needs to be oriented toward accuracy, precision, reliability, and dependability. With these systems in place, engineers’ use of GenAI succeeds when outputs are constrained or shaped such that they are “good enough” [102] that these systems can be brought to bear, and any ‘inaccuracies’ are superficial and easily addressed. In turn, interviewees largely considered the generation of large swaths of code or entire applications to be an inappropriate and largely futile use, as it thwarted or rendered ineffective their existing social practices oriented toward accuracy, precision, and dependability (see Section 4.3). In this context, the lack of concern engineers evinced in interviews does not read as a null finding. Instead, it indicates that barriers to engineers’ GenAI adoption are not contingent on the perfection of GenAI system performance as measured in common accuracy metrics [85]. Crucially, hardware and software engineers experience distinct forms of “trouble” in using GenAI for their on-the-job tasks, apart from inaccuracy. It is to these forms of trouble we now turn; identifying and mitigating trouble emerges as a critical vector in the development of GenAI for engineering tasks. # 4.2 Trouble The most common form of trouble reported (by n = 11 interviewees) was how a GenAI system made use of documents supplied to it, either directly in a prompt or as part of a RAG implementation. For example, interviewees described how the GenAI would reply with the contents of cells adjacent to the one that ‘should’ have been the subject of a response, or with a response drawn from a footnote rather than the more authoritative text from the main body of a document. 
The tool could not use the ‘context clues’ provided by the layout of a table to make valid inferences about its content. The second most common trouble reported (n = 9) was GenAI responses that were “too generic” to be useful: for example, responses that described general steps one might take to solve a problem rather than providing the solution asked for in context. A third major category of trouble reported by users (n = 6) was difficulty with numerical operations: for example, returning arbitrary responses when asked to convert hexadecimal notation to binary, to count the number of bits in a binary string, or to perform seemingly straightforward arithmetic or algebraic operations. Other clusters of trouble reported by n > 3 interviewees included explicit references to “hallucination” (n = 5) and “tone” (n = 4). Hallucination trouble mapped onto the broader literature on AI hallucinations, in which the tool provided an output that seemed appropriate to the context but lacked some other form of validity [82]. “Tone” trouble actually covered a wide gamut of interviewee complaints, from overall reports of GenAI’s tone being cloying, lugubrious, or overly eager-to-please, to being simply inappropriate for the context (e.g., an upbeat tone for a report of bad news, or an overly casual tone for a company-wide email or report to a superior). 
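For contrast, the numerical operations interviewees found GenAI unreliable at are deterministic one-liners in conventional tooling, which helps explain why arbitrary answers to them registered so sharply as trouble; a brief illustration (the specific hex value is arbitrary):

```python
# The numeric tasks interviewees reported trouble with are deterministic
# one-liners in conventional tooling. The hex value here is arbitrary,
# chosen only for illustration.

# Convert hexadecimal notation to an integer, then to binary.
value = int("1A3F", 16)
binary = format(value, "b")

# Count the number of set bits in the binary string.
bit_count = binary.count("1")
```

Unlike a GenAI chatbot, these operations return the same correct answer on every invocation, with no prompt engineering required.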
Additionally, single interviewees reported trouble arising from how the user interface displayed outputs (e.g., line-by-line, word-by-word, or in large text blocks that filled the screen), how conversations with a GenAI-powered chatbot became ‘canalized’ or went “down a rabbit hole” in hot pursuit of an incorrect solution path, how a GenAI refused a reasonable prompt (in the eyes of the interviewee) and instead explained that providing a response would be somehow unsafe, and how often a GenAI tool would provide a response that was bafflingly unaware of the context in which a prompt was given (even when the user was explicit in providing context). Additional instances of trouble are included in Table 2 below.

Table 2. Trouble and Repair

A majority of these types of trouble have to do with a perceived failure of the GenAI tool to provide a contextually appropriate output that is immediately useful to users. Outputs that are “too generic”, that misrecognize terms of art or company-specific language, ‘hallucinations’, mismatches of tone, inconsistent responses, etc. stem from a mismatch between what users expect and what they get from a GenAI tool. Users’ expectations are shaped by the context in which they are working, e.g., the version of the programming language they are using, stylistic conventions that characterize their company’s codebase, the social norms of how to communicate up or down the managerial chain within their organization, and so on. When a GenAI tool’s outputs fail to account for—or appear not to account for—that context, it falls to users of these tools to repair that context, either by returning to the tool and adjusting their request or by editing the output directly, to recover something useful from the frayed threads of a troubled interaction. Where the trouble consists of straightforward errors or inaccuracies, engineers rely on existing work practices to repair the faulty outputs of GenAI tools.
For both error repair and context repair, the effort that comprises repair work represents an externality that needs to be measured and mitigated as part of any claims to GenAI tool effectiveness or productivity improvement.

# 4.3 Repair and Recovery

As has been discussed above, engineers encountered a range of trouble in their use of GenAI, with inaccuracy seldom reported as a dominant form of trouble. Why these things were experienced as trouble had to do with the need to conduct repair work, which, in various ways, amounted to attempts at controlling context. Engineers described a set of practices already common in engineering that they use in tandem with (or as an enveloping set of practices around) GenAI. These practices are longstanding techniques used to forestall, repair, and recover from the common (and frequent) errors and misunderstandings that arise in everyday engineering work, developed quite apart from GenAI but inseparable from engineering practice with or without GenAI. While varying in their particulars, engineers broadly described a set of practices that can be described as 1) Atomizing the Work, 2) Iteration, 3) Making Explicit, and 4) embedding GenAI use within existing software and hardware Organizational Workflows. Notably, engineers describe the utility of many of these practices for repairing and recovering from non-accuracy-related concerns as well.

4.3.1 Organizational Workflows. Organizational workflows refer to the expected set of pre-existing practices and organizational forms in which engineers’ use of GenAI is embedded. These practices have long given order to the social milieux of software and hardware engineering [18], both as professions and as specialized teams within specific organizations. The outputs of GenAI—like the outputs of individual contributors—are subject to practices like code review, visual inspection by engineers who have developed difficult-to-articulate heuristics for ‘what looks right or wrong’, and unit tests.
These tools and practices have been used for decades to manage work, identify and correct errors in code, and navigate the professional demands of engineering. Here, they are the bulwark of confidence engineers relied on when they expressed little concern about the accuracy of GenAI tools; no element of engineering is error-free, but the interlocking practices of engineering are relied upon to sift out, at an organizational level, errors that arise within any single tool, practice, or person. With this bulwark in place, it is not the prospect of putting bugs into production that troubles engineers so much as the halting, repetitive, off-target, or overly generic interactions with GenAI tools needed to produce outputs that can then be added to their work along the way to code review or unit testing.

4.3.2 Atomizing the Work. Atomizing the Work refers to the practice of reducing complex, multi-step, or cumbersome tasks into smaller, more manageable pieces. In engineering more broadly, large tasks are commonly decomposed to allow work to proceed in parallel [23] or to execute complex tasks piece by piece [42, 8]. For GenAI, engineers described how, in the words of one hardware engineer, “rather than ask for complete code” they are more likely to “work on one function at a time or so. Let’s work on one function, put that in my IDE [Integrated Development Environment] and then go from there and work on another one.” Another, slightly less-experienced hardware engineer stated that “I definitely have been trying to give it less large sections of code... The more information you give it, the more likely it is to do something chaotic or go a little crazy. If you keep it at 10, 15, 20 lines of code that are pretty concise and have a concise function it does a really good job, but if you start giving it 100 lines and saying, hey, can you make this one change?
It can start going off the rails a little bit.” For some, this practice may be couched in an implicit understanding of the GenAI tool’s overall capabilities, such that “when I upload the code and I asked it to explain things, there are times where it doesn’t understand my question, but I’m able to split it up in multiple parts so it’s easier to digest that,” as one hardware architect put it. But it also may be strategic, to forestall the potential for error that always exists in collaborative endeavors; the same engineer described this as trying “to be very compartmentalized, so I would paste small functions, so I would be able to gauge and test if it’s actually doing the correct thing” in the context of the longer piece of code. For this engineer, atomizing the work in this way had become a regular part of their coding practice, to ensure they had oversight of every line of code they used, and to limit the risk of an unnoticed error slipping through their workflow. Another described this as a means of control: “I don’t want it to write the project, I just want it to write the different functions and small components of the project. I still maintain control of the overall project, so I think it limits the ... exposure to risk.”

4.3.3 Iteration. Iteration refers to the practice of refining a task or an output over successive versions until it reaches a desired end state. Iteration is common in software engineering, and is an intrinsic component of the popular agile software development methodology [28], but has also long featured in engineering more broadly as a way to elicit stakeholder input and refine engineering specifications [12]. When using GenAI, an engineer might iteratively refine a prompt given to a GenAI tool, given the output, until reaching a desired goal. “I’ve learned that you can continue to massage your ask to a point where it gets you better results.”
Rather than refine specifications, engineers report using far more linguistic cues (as opposed to tweaking actual parameters that produce an output): “There is definitely some art crafting your prompt to make it do what you want. You can even tell it to prioritize certain things, or respond back saying, ‘hey, that’s great but I didn’t like this part. Can you fix that?’ and it will go back and rework the code.” Following the software engineering principle of modularity [21], iteration is sometimes used in conjunction with Atomizing the Work. One software engineer who has been working on data and databases for more than twenty years said, “I basically iterate, put it in, test it out, then go back, and if everything is up to that point, what I’m expecting. Then we go on to the next step.” But iteration can also be used by engineers to generate novel approaches they might not have landed on by themselves, describing it as “more of learning that process of, if I do it this way, what does it generate? Does it generate what I expect it to? So OK, let me reword it a little bit and let me specifically tell it.”

4.3.4 Making Explicit. Making explicit refers to the practice of carefully and explicitly manipulating the information provided to a GenAI tool in a prompt to achieve a desired output. This can consist of “prompt engineering” [77], the practice of rewriting a prompt in response to an output that needs repair, or the deployment of a carefully honed and tested prompt that has been demonstrated to produce contextually appropriate results. This involves explicitly including contextual details needed for an appropriate output. For example, an engineer might ‘enroll’ a GenAI tool as part of a prompt by telling it that it is an integrated circuit design expert, or that it is producing an output for a specific audience.
An engineer might also include code examples exemplifying the style conventions of a team or project, to be emulated by the GenAI tool in its output. It can also consist of pointing the GenAI tool to documents that act as a context for the response. In this sense, RAG is a way to make context explicit, as are many of the affordances of AI coding assistants that treat files that are open or flagged within an IDE as relevant to a prompt or AI-generated code completion. One reason that making context explicit is crucial for engineers is that GenAI tools draw on GPTs that—while supplemented with engineering-specific data—were designed as general-purpose tools that could conceivably answer any question on any topic [70]. From the tool’s “point of view,” plausible responses to prompts could conceivably draw on any realm of human discourse, unless the tool is explicitly instructed otherwise (through development techniques that will be discussed below). Therefore, engineers have found that orienting their prompt within a more constrained domain of knowledge prevents ‘misunderstandings’ or inapt responses. In the words of one systems validation software engineer, “it needs the context, right? So if you provide the context and then you start asking questions, then it’s more specific to what you’re asking.” They report going so far as to explicitly declare the professional role the GenAI tool ought to ‘play’, the role they themselves occupied, and what was expected of the tool: “When I first started writing prompts for [their employer’s internal GPT chatbot], I was extremely specific. I would say I am a software developer and a hardware validator and I would like you to create a function that would do $x$ constrained by $y$ and I would make my prompt as detailed and informative as I could possibly do it and it was very good,” said one front-end software engineer.
This particular engineer continued with a more macroscopic “folk” understanding [57] of how making things explicit might work, saying “but what I realized is I don’t have to do that on every single prompt and that’s why I have so many conversations here with [this GenAI tool]. Once I make that initial prompt, it seems to maintain that initial understanding through all my subsequent prompts.” Many publicly available GenAI tools allow conversational threads to persist, while others provide affordances to ‘reset’ the conversation. Attempts to control the context of that conversation, whether new or old, are crucial for engineers’ management of expectations around GenAI outputs.

# 5 Mapping Trouble

The trouble engineers experience (Table 2) is not a case of user error or evidence of the need for more training in how to use such tools [109]. Rather, it is a way of experiencing Ackerman’s socio-technical gap. The gap between design choices and engineers’ goals puts the onus on those engineers to set and reset, steer and re-steer, toward the context they want the tools to work within. This onus is perhaps unsurprising given the prioritization of generalizability and scale by developers of GenAI systems [110]. However, locating where those gaps manifest specifically in design choices, and mapping those choices onto trouble as experienced by users, is complicated by the complexity of GenAI tools and the multiplicity of components that are assembled to comprise a user-facing product. This section uses the forms of trouble discussed above as instructive signposts toward which system components can be targeted, to A) reduce the trouble engineers experience and B) support the practices of repair and recovery they employ. Minimizing trouble and supporting repair cannot happen at the overall system level. Instead, reparative practices must target specific components of the GenAI assemblage.
Trouble can be associated with multiple elements because the elements of any sociotechnical system interact and mutually shape each other [16]. It should be noted, however, that this mapping is specific to integrated circuit designers and the GenAI systems they were using at the time of this study. A similarly designed study of other contexts and domains (e.g., medical professionals using a hospital-developed system) might result in a different mapping. It is hoped there would be rough correspondences of trouble and elements across multiple domains, so that some design implications can be generalized.

Moss et al.

Table 3. Mapping trouble onto elements.

# 5.1 Pipeline

GenAI tools are produced through a pipeline that connects various computational techniques to produce the tool itself. This pipeline ingests text—for language models this consists of text and tabular data culled from documents, but large multimodal models (LxMs) might also include images, audio, and video—and results in a model that can transform text-based prompts from users into text outputs (or multimedia outputs for LxMs). The pipeline, then, consists of all the steps needed to move from text to model. This includes the datasets and other training corpora that provide the text used to train the model, but also the protocols used to clean and filter those corpora [98]. It also includes the algorithms used to produce statistical models for language in the training data: the zero-shot learning techniques, the transformer architecture, the parameters used to minimize error according to specific objectives, and many other computational techniques [118]. Specifically, the ability to parse documents depends on how inputs are ‘tokenized’ and whether tabular data is accounted for by a ‘tokenizer’. Elsewhere in the pipeline, how polysemous words are embedded within a language model is shaped by their frequency in training datasets, and the frequency with which alternate meanings of such words appear.
Cutoff dates for training data also affect whether outputs are current or outdated. The tone of outputs is affected in part by designer-supplied metaprompts (such as instructions to ‘be helpful’) that accompany user inputs and shape outputs, and in part by how user preferences about tone are anticipated by specific design mechanisms [67]. The pipeline also includes techniques for fine-tuning a GenAI tool and tailoring the range of possible outputs to meet the needs of developers. Fine-tuning can consist of supplementing the model with additional domain-specific data, for example making a model more suitable for use in solving problems in organic chemistry [125]. But models are also trained to meet social considerations of what a ‘good’, ‘suitable’, or ‘safe’ output might look like. To achieve this, a reinforcement learning model might be trained to match the enacted preferences of developers, online task-workers, or other communities [67], or given explicit rules to follow via a ‘constitution’ that ‘governs’ the tool’s behavior [66]. Tokenization, fine-tuning, and reinforcement learning represent design choices about how generalizable a model is: collapsing multiple context-specific meanings into single vectors, bounding the context in which particular outputs might be generated in ways that vary from specific contexts of use, or ensuring potential harms are limited across a majority of use cases even if doing so renders a tool useless in specific use cases where such harms are unrealistic. Increasingly, GenAI tools are being customized within organizations by augmenting the tool with additional data through RAG, which uses such documents to retrieve information that the original GPT would not have had access to. Using RAG to augment a GenAI tool promises to make it more responsive for tasks that relate to the context of the organization.
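Mechanically, RAG retrieves the documents most similar to a user's prompt and prepends them to the model input. A toy sketch conveys the shape of the technique (document names and the word-overlap scoring are illustrative stand-ins for the embedding similarity real systems use):

```python
from collections import Counter

# Hypothetical organizational documents supplied to a RAG implementation.
ORG_DOCS = {
    "spec_v2.txt": "register map for the memory controller, 32-bit bus",
    "style.txt": "team coding conventions for verilog modules",
    "faq.txt": "how to request lab access and file tickets",
}

def retrieve(query: str, docs: dict, k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (a crude stand-in
    for embedding-based similarity search)."""
    q = Counter(query.lower().split())
    scores = {
        name: sum((q & Counter(text.lower().split())).values())
        for name, text in docs.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Stuff the retrieved text into the prompt as context for the model."""
    context = "\n".join(ORG_DOCS[name] for name in retrieve(query, ORG_DOCS))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(retrieve("what is the memory controller register map", ORG_DOCS))
```

Which documents end up "in context" is entirely a function of this retrieval step, which is why the fit between document set and task matters so much in practice.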
However, the fit between A) the documents provided as part of a RAG implementation and B) the tasks the tool is intended to accomplish is not always straightforward, nor is it always readily apparent to end users. Additionally, organizations can contain multitudes, and local contexts within teams or branches may differ significantly from the broader organization-wide context. All of these are components of the development pipeline that produces a GenAI tool, and each can present a source of trouble for engineers using such tools.

# 5.2 Features

For the entire class of GenAI tools discussed here, the model produced through the pipeline described above is made accessible to end users through a front-end interface that shapes their experiences. This front-end interface may introduce trouble. For example, some engineers interviewed in this study reported trouble arising from the ways outputs were displayed on-screen, particularly when the GenAI tool emulated a typewriter by displaying results one letter or word at a time. The pace of this display frustrated engineers (particularly those who reported an awareness that hardware latency issues were not to blame for this pace, and that it seemed to them like a deliberate choice made by product designers). Conversely, engineers using another tool expressed frustration when massive blocks of text were displayed that exceeded a single screen’s capacity. Relatedly, GenAI tools may attach “metaprompts” [123] written by tool developers to users’ prompts. These metaprompts are text instructions that users do not see, that are passed to the GPT model alongside the user prompt, and may include instructions to ‘be helpful’, ‘don’t swear or display toxic behavior’, or even ‘explain your reasoning at an 8th grade reading level’. However, these instructions may sometimes be at odds with the intentions of users.
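Mechanically, a metaprompt is simply hidden instruction text that the tool assembles ahead of every user prompt before it reaches the model; a minimal sketch (the metaprompt text and function name are hypothetical, for illustration):

```python
# Designer-supplied instructions the end user never sees.
METAPROMPT = (
    "Be helpful. Do not use toxic language. "
    "Explain your reasoning at an 8th grade reading level."
)

def build_model_input(user_prompt: str) -> list[dict]:
    """Assemble the messages actually sent to the model.
    The system entry is invisible to the user but shapes every output."""
    return [
        {"role": "system", "content": METAPROMPT},
        {"role": "user", "content": user_prompt},
    ]

messages = build_model_input("Draft an email to the research team.")
# The hidden instruction travels with every request,
# whether or not it suits this particular user's intent.
```

Because the system entry is attached unconditionally, a user who needs graduate-level prose is silently working against the grade-school instruction on every attempt.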
An engineer attempting to draft an email to a scientific research team, for example, might have difficulty prompting the tool to produce text at a sufficient level of complexity if an unseen, undisclosed metaprompt instruction to produce text written at a grade-school level is always attached to that engineer’s best efforts. Such a feature is an (invisible) aspect of the context of the interaction with the GenAI tool that potentially produces trouble for the user. Other features wrapped around a GPT model can also cause trouble for engineers. These might include restrictions on the size of the text entry box for user prompts, the types of files that can be appended to a user’s prompt, functionality related to the threading of input-output chains (i.e., conversations), and how a specific tool allows for the restarting of prompt chains. Additionally, metaprompts to ‘be helpful’ dovetail with the propensity of GenAI tools to provide an answer, any answer, whether there is relevant data in the training set or not. This has been suggested as a key source of hallucinations in GenAI outputs [82], but it also produces trouble for engineers who find value in knowing about data voids so that they can resort to other methods for accomplishing a task.

# 5.3 Grounding

Another element of GenAI’s sociotechnical system is the semiotic worlds in which the text (and other media) making up GenAI tools’ training data, inputs, and outputs are grounded. Texts, and the tokens that comprise them, are generally arbitrary symbols that are ultimately meaningless unless grounded in the social worlds in which they are produced, hold meaning, and are interpreted [6, 9]. Through a range of clever and effective NLP innovations [73], transformer models produce plausible and useful outputs, and so have managed to largely skirt the grounding problem.
However, there is an asymptotic relationship between the capacity of GPTs to emulate semantically appropriate language and full linguistic competence [100]. From a positivist perspective, this is because, as Banerjee et al. point out, simple prompts can always be ambiguous, and training data are always necessarily incomplete and already out-of-date [100]. From a phenomenological perspective, however, this is because semantic meaning is not objective data that stands apart from human practices of sensemaking, which negotiate ambiguity and produce meaning through context-laden interactions [59]. Put simply, and uniting both these perspectives, GenAI tools present trouble to engineers because the world is not and cannot be fully represented linguistically, and large language models can perform no better at this task than language itself. However, design affordances that increase the amount or specificity of context with which GenAI tools operate can dramatically resolve this ambiguity, as contextual cues are precisely what allow humans to resolve ambiguity when they interact with each other [29, 24, 2].

# 6 Remediating Trouble

While associating forms of trouble with elements of a GenAI tool (see Table 3) is insufficient for remediating trouble, or for supporting users in addressing trouble on their own, it does point out where interventions may be useful. Generally speaking, responsibility for trouble associated with pipeline elements falls on those who make choices about that pipeline. This points to people who make choices about the training, deployment, and fine-tuning of foundation models. Responsibility for trouble associated with feature elements is more difficult to assign. Choices about some model features are made by the same parties, e.g., vendors, that develop the overall pipeline. Choices about other model features might be made by downstream customers or consumers of vendors’ foundation models.
Nevertheless, a range of interventions can be recommended to reduce trouble for engineers employing GenAI on the job. Making these interventions successful requires understanding where to apply leverage. Understanding GenAI tools as sociotechnical systems helps identify not only the elements of GenAI systems that produce trouble (see Section 5) but also where interventions can be made on behalf of engineers, within an organization that develops and deploys GenAI tools for engineers to use. For the GenAI systems analyzed as part of this study, which used vendor-supplied foundation models that were built upon by developers within the same organization as their users, the opportunities for remediating trouble are fairly broad. However, only parts of the pipeline described are subject to direct intervention from within the organization. The foundation model itself, its training data, how it was tokenized, and many other aspects of fine-tuning are “black boxed” within the model weights taken up by the organization and built upon [126], although workarounds for this limitation have been proposed [129].

# 6.1 Organizational Interventions

Some interventions in the trouble engineers experience have to do with how work is distributed and organized. Specifically, if the ability of a RAG-based tool to parse documents causes trouble, or if the tool has difficulty parsing documents provided as prompt inputs, it may behoove an organization to adapt its documentation practices to interface more reliably with the way GenAI tools parse such documents. Within the organization this research addressed, industry- and organization-specific documentation of technical specifications plays a very important role in hardware and software engineering, which presents an opportunity to design GenAI pipeline components that can better parse internal documentation, or to revisit documentation practices so that documents are more easily parsed by existing GenAI pipelines.
Depending on the organization, this may require a targeted study, trial-and-error, and/or a thorough review of how changing documentation may alter other business processes. Another possible organizational intervention is to ensure that engineering practices oriented toward accuracy and precision, which reduce the overall risk of inaccurate outputs from GenAI tools (see Section 4.1), do not erode as GenAI tools gain broader adoption. A significant implication of the research presented above is that code review, pair programming, unit testing, etc. retain their importance and may even become more important as GenAI use grows. Additionally, organizations need to ensure novice engineers have pathways to gain the professional experience needed to effectively supervise GenAI tools. Nearly half of the engineers (n=8) interviewed as part of this research project, all of whom were intensive users of GenAI tools and mid-career engineers or more senior, reported a lack of confidence that a novice engineer would be effective in identifying inaccurate outputs from GenAI tools. Additionally, these engineers frequently spoke of the need for engineering experience to be able to guide a chat-based interaction with GenAI tools toward a productive goal, saying “I’m using my experience to look at the output and [ask myself] ‘Hey, does that make sense?’” For both documentation and professionalization, organizations must inventory their own processes and needs to identify the best ways to adapt to the use of GenAI tools and to maintain the growth of expertise and professionalization for engineers.

# 6.2 Transparency Interventions

A significant portion of trouble reported by engineers had to do with their difficulty in understanding and anticipating the exhibited behavior of the GenAI tools they were using. Engineers were consistently surprised and disappointed by the tools’ responses to what seemed like straightforward prompts.
In these instances, they reported a great deal of trouble steering tools toward more useful outputs. As reported above (Section 4.3.4), controlling context is a key practice engineers use to repair and recover from troublesome GenAI tool outputs. Their attempts to control context, however, are often guided by intuition and guesswork more than by an explicit awareness of the assumptions made by the GenAI tool about the context within which it is working. To aid users, developers and deployers of GenAI tools (at both the Pipeline and Feature levels) could support engineers by providing transparency at multiple points. Engineers could benefit from seeing information about the metaprompting that accompanies their prompts, the features of a prompt chain that persist from one interaction to the next, and feedback on the amount of uncertainty latent in both users’ prompts and tools’ responses. Internally, the organization addressed by this study has implemented a “prompt store” to share useful prompts, but it could also internally document the metaprompts that shape system behavior for all users. Such transparency would need to go beyond existing proposals for greater transparency into foundation models [104] and include other components and features of GenAI tools as well. A full exposition on uncertainty in generative AI tools is beyond the scope of this paper, but uncertainty and stochasticity are fundamental features of GenAI, and transparency about the level of uncertainty that pertains to specific interactions can aid users in understanding how to steer outputs toward desired ends. This is supported by early research on the calculation and communication of uncertainty [114, 106, 81, 72, 115], as well as by data collected for this study that highlights the trouble specifically associated with variability and uncertainty.

# 6.3 Technical Interventions

Technical interventions, too, are important for reducing the trouble experienced by engineers who use GenAI.
The troubles outlined above (Section 4.2) point toward specific interventions which go beyond the purely technical advances often associated with the development of GenAI. One intervention is into how documents are parsed for RAG implementations and as inputs to user prompts. This is the opposite side of the coin discussed in Section 6.1. Here, thinking about RAG functionality in a different way, as providing additional context clues rather than just relevant grounded data, can be especially productive. The documents that are included in a RAG implementation provide context for the interaction between engineer and GenAI tool. It’s true that most RAG setups produce a tool that has access to an organization’s data, resulting in a context of use that is narrower than a general-purpose GenAI tool—but most are still very broad and could benefit from a more deliberate pursuit of contextual clues. Finer-grained context control would enable individual engineers or smaller teams to more narrowly circumscribe the documents they want to be ‘in context’ for a set of tasks they wish to accomplish with GenAI. Another intervention is the technical work needed to efficiently calculate and communicate uncertainty as a property of GenAI inputs and outputs, as discussed above (Section 6.2). If a GenAI tool could provide information about which elements of an output came with higher uncertainty, this could potentially show users where more context could help. This could support and hasten the repair of context. Further work is needed to demonstrate this possibility. Additionally, illustrating the uncertainty a GenAI tool has about how to parse a user’s input might aid in the repair of context through real-time prompt rewriting. Such calculations are a non-trivial challenge to produce and use, although early research shows some promise that uncertainty estimation is possible and useful [113, 96]. 
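One way such uncertainty could be surfaced, assuming per-token log-probabilities are available from the model (as some APIs expose), is to flag low-confidence spans of an output for the user; a rough sketch with made-up token values:

```python
import math

def flag_uncertain_tokens(tokens, logprobs, threshold=0.5):
    """Return (token, probability, flagged) triples, flagging tokens whose
    probability under the model falls below the threshold."""
    out = []
    for tok, lp in zip(tokens, logprobs):
        p = math.exp(lp)
        out.append((tok, round(p, 3), p < threshold))
    return out

# Hypothetical output tokens with per-token log-probabilities.
tokens = ["The", "register", "width", "is", "32", "bits"]
logprobs = [-0.05, -0.10, -0.20, -0.02, -1.60, -0.30]
for tok, p, flagged in flag_uncertain_tokens(tokens, logprobs):
    marker = " <-- low confidence" if flagged else ""
    print(f"{tok}: p={p}{marker}")
```

In this illustration the numeral "32" would be flagged, pointing the engineer to exactly the span where supplying more context (or independent verification) is most needed.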
A third, and little-discussed, intervention is into how reinforcement models shift outputs toward the exhibited preferences of users through a process called reinforcement learning from human feedback (RLHF). Most commonly applied to foundation models within an overall model development pipeline, reinforcement models learn a reward function from human feedback [67]. This feedback is procured through online taskworker platforms. These tasks are oriented toward collecting generic feedback on whether taskworkers prefer one of a pair of outputs produced in response to a wide range of prompts. These prompts must be quite general for developers to achieve any significant degree of coverage across the vast range of possible use cases their foundation models are developed to satisfy. This leaves a significant amount of latent space between domains, and it is unlikely that any single topic—particularly integrated circuit design—has been adequately exposed to human feedback from expert engineers to support this stage of the development pipeline. The ability to do RLHF within specific professional contexts is potentially a key development that would help make repair work within specific contexts of use more seamless and less troublesome for users, particularly engineers. That said, additional work is needed to assess the extent to which the trouble experienced by engineers can be attributed to RLHF and how RLHF practices can be employed to mitigate trouble for engineers using GenAI tools.
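The reward models at the heart of RLHF are commonly fit to exactly such pairwise choices via a Bradley–Terry-style objective; a toy sketch of the preference loss (all reward values illustrative):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry style loss: the modeled probability that the 'chosen'
    output is preferred is sigmoid(r_chosen - r_rejected); the loss is its
    negative log, so ranking the preferred output higher lowers the loss."""
    diff = reward_chosen - reward_rejected
    prob_chosen = 1.0 / (1.0 + math.exp(-diff))
    return -math.log(prob_chosen)

# When the reward model already ranks the preferred output higher,
# the loss is small; when it ranks it lower, the loss is large.
print(round(preference_loss(2.0, 0.0), 3))  # 0.127
print(round(preference_loss(0.0, 2.0), 3))  # 2.127
```

Because the loss is driven entirely by which of two outputs annotators preferred, a reward model trained on generic taskworker comparisons has no signal about domain-specific preferences, such as an integrated circuit designer's, unless such comparisons are collected.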
Generative AI tools have become more prevalent in engineering workflows, particularly through chatbots and code assistants. As the perceived accuracy of these tools improves, questions arise about whether and how those who work in high-precision domains might maintain vigilance for errors, and what other aspects of using such tools might trouble their work. This paper analyzes interviews with hardware and software engineers, and their collaborators, who work in integrated circuit design to identify the role accuracy plays in their use of generative AI tools and what other forms of trouble they face in using such tools. The paper inventories these forms of trouble, which are then mapped to elements of generative AI systems, to conclude that controlling the context of interactions between engineers and the generative AI tools is one of the largest challenges they face. The paper concludes with recommendations for mitigating this form of trouble by increasing the ability to control context interactively.
# 1 Introduction

A healthcare conversational system is a dialogue-based framework specifically developed for the medical domain. Its primary purpose is to interact with patients, systematically collect supplementary symptom information, facilitate preliminary diagnostic processes, and provide automated recommendations for treatment plans (Tang, 2016; Wei et al., 2018; Liao et al., 2022; Zhong et al., 2023). Healthcare conversational systems demonstrate significant potential to enhance the efficiency of diagnostic procedures while reducing the costs associated with patient information collection (Chen et al., 2023a; Wang et al., 2023b). In recent years, large language models (LLMs), e.g. ChatGPT (OpenAI, 2023), LLaMA (Touvron et al., 2023a,b), and Qwen (Yang et al., 2024a), have been widely applied in medical fields and have catalyzed substantial technological transformations in dialogue systems. Through high-quality instruction fine-tuning and reinforcement learning from human feedback (RLHF) (Ouyang et al., 2022), LLMs possess the ability to generate human-like responses with exceptional accuracy, thereby redefining the foundational framework of healthcare dialogue systems. While large language models fine-tuned for medical dialogues have demonstrated the ability to produce knowledge-intensive and contextually appropriate responses (Wu et al., 2023; Han et al., 2023; Chen et al., 2023b; Li et al., 2023b; Singhal et al., 2022, 2023; Toma et al., 2023), a discrepancy persists between the generated responses and the appropriate real-world answers in certain medical consultation scenarios, particularly in emotion-related contexts. For example, in Figure 1, when patients exhibit intense negative emotions during consultations due to the severity of their condition, employing a comforting tone while delivering solutions can significantly mitigate their psychological distress.
However, existing LLM-based medical dialogue systems primarily focus on the rationality and accuracy of the responses’ knowledge. In contrast, emotion-focused large language models (Liu et al., 2024) emphasize emotion recognition without prioritizing the acquisition of medical knowledge. In this paper, we aim at developing an approach that effectively trains a model to deliver knowledgeable responses while maintaining a balance of emotional comfort, thereby enabling more realistic and human-centric interactions.

Figure 1: Illustration of an example of the emotional healthcare dialogue system (a patient question, the original doctor response, and the model’s empathetic response).

Inspired by the exceptional creativity of large language models (Tsai et al.; Angel et al.), we first utilized them to modify the emotional tone of real-world doctor-patient dialogues. This approach generated patient queries infused with specific negative emotions, alongside medical responses designed to soothe the patients’ negative emotional states.
We then applied three distinct approaches to fine-tune the base model using the aforementioned modified dialogues. The three fine-tuning methods are: 1) SFT (Supervised Fine-Tuning) (Wei et al., 2022), 2) DPO (Direct Preference Optimization) (Rafailov et al., 2024), and 3) KTO (Kahneman-Tversky Optimization) (Ethayarajh et al., 2024). These approaches have been validated as effective strategies for aligning large language models to specific tasks. By integrating these techniques, the fine-tuned model can generate responses that balance knowledge delivery with emotional soothing. The effectiveness of our proposed methodology is verified through experiments on another doctor-patient dialogue dataset with emotion-specific scenarios. We further analyze several factors that affect the performance of the LLM, including fine-tuning methods, modified datasets, emotional categories, and evaluation models. To the best of our knowledge, this is the first LLM-based medical dialogue system to explore how to balance knowledge expression and empathy in real-world medical conversations. Additionally, our work enables medical dialogue systems to foster more meaningful interactions by addressing both the informational and emotional needs of patients, creating a more supportive consultation experience. The contributions of this paper are as follows:

• We utilized a large language model to rewrite and generate patient consultations with negative emotions and medical responses aimed at soothing those emotions.

• We experimented with three fine-tuning approaches to enable the model to learn how to balance knowledge delivery and emotional soothing.

• We tested and analyzed the model’s performance to determine whether it could effectively balance knowledge and emotional expression on a real-world medical dialogue dataset.

# 2 Related Work

# 2.1 Healthcare Conversations System

Building healthcare conversational systems is an important yet challenging task in the medical domain.
In recent advancements, large language models have exhibited remarkable capabilities in downstream tasks, reshaping the foundation of medical dialogue systems. According to the existing literature (Shi et al., 2024), medical dialogue systems can be broadly categorized into two groups based on their relation to the emergence of large language models. The methods before the emergence of LLMs are divided into three categories: retrieval-based methods, generation-based methods, and hybrid methods (Wang et al., 2023c). Retrieval-based medical dialogue systems are designed to select appropriate responses from a pre-built index (Tao et al., 2021; Zhu et al., 2022). Generation-based methods can be categorized into two approaches: pipeline and end-to-end. Pipeline methods generate system responses by utilizing multiple subcomponents (Zhang et al., 2020; Naseem et al., 2022), whereas end-to-end methods produce system responses directly from the dialogue history and the associated knowledge base (Zhou et al., 2021; Zhao et al., 2022). Hybrid methods combine both approaches, using retrieval for efficiency and generative methods for flexibility (Yang et al., 2021; Li et al., 2018). Medical dialogue methods based on LLMs can be divided into two categories: prompting and fine-tuning general LLMs. Prompting methods give instructions to prompt LLMs to perform a task efficiently (Wang et al., 2023d; Gao et al., 2023; Tang et al., 2024; Singhal et al., 2022, 2023). Fine-tuning foundation models on medical data can align LLMs with medical scenarios
(Ye et al., 2024; Toma et al., 2023; Wu et al., 2023; Li et al., 2023b; Han et al., 2023; Huang et al., 2022; Chen et al., 2023b; Liu et al., 2023; Wang et al., 2023b; Xiong et al., 2023; Wang et al., 2023a).

# 2.2 Emotion Language Model

Even though large language models demonstrate remarkable language understanding and generation capabilities, there is a considerable gap between the Emotional Intelligence (EI) capabilities of existing LLMs and humans. Wang et al. (2023e), Sabour et al. (2024), and Paech (2024) propose comprehensive frameworks for Emotional Intelligence, including assessments of emotional understanding and application. Li et al. (2023a), Liu et al. (2024), and Xu et al. (2024) enhanced LLMs with prompting or fine-tuning to improve their Emotional Intelligence performance.

# 3 Methodology

To develop a model that delivers knowledge-rich responses while simultaneously addressing emotional comfort in emotion-sensitive healthcare conversations, we first construct a dataset tailored to this specific scenario. Then, we fine-tune a base model on the constructed dataset with three renowned fine-tuning methods to enhance its ability. The details of the components are described in the following sections.

# 3.1 Data Modification

We constructed an emotional healthcare dialogue dataset, which consists of Empathetic Response (ER) data and Emotional Question (EQ) + Soothing Response (SR) data. The objective of the Empathetic Response (ER) data is to enable the model to generate responses that exhibit empathy, even in the context of standard medical inquiries. On the other hand, the Emotional Question (EQ) + Soothing Response (SR) data seeks to equip the model with the ability to handle patient consultations involving negative emotions by delivering informative responses alongside emotional reassurance. Both types of emotional dialogues are structured as single-turn utterances.
We first divided an existing real-world single-turn medical dialogue dataset, which is collected from internet platforms, into two parts. Then, we designed distinct, tailored prompts to utilize a large language model for modifying the doctor’s responses in each dialogue of both parts, because doctors often respond very briefly through internet platforms, lacking emotional tone. For the Empathetic Response (ER) part, the large language model was prompted to generate responses that exhibit empathy and compassion while retaining medical knowledge based on the given dialogue. For the Emotional Question (EQ) + Soothing Response (SR) part, the large language model was prompted to rewrite the given dialogue into patient queries with negative emotions and responses that are reassuring yet maintain medical knowledge. Below is the prompt template we used for EQ+SR data.

You will be given a dialogue between a patient and a doctor. Please rewrite the patient's question ensuring that it retains the original information while expressing a sense of {emotion}. At the same time, rewrite the doctor's response to retain the original information while soothing the patient's {emotion}.

# 3.2 Supervised Fine-Tuning

Supervised Fine-Tuning, which can also be referred to as instruction tuning (Zhang et al., 2024), is a crucial technique to enhance the capabilities and controllability of large language models. It involves further training LLMs using (INSTRUCTION, OUTPUT) pairs, where instructions serve to constrain the model’s outputs to align with the desired response characteristics or domain knowledge. We chose the LLaMA3 model (Grattafiori et al., 2024) as the base LLM architecture for further fine-tuning, since it is open source and offers excellent language understanding and generation with relatively few parameters. We conducted SFT on the base model using the dataset we constructed in Section 3.1 to improve its abilities in emotion comprehension and soothing.
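To make the data flow concrete, the sketch below shows how a dialogue could be fed to the EQ+SR rewrite prompt from Section 3.1 and how a rewritten dialogue could then be packaged as an (INSTRUCTION, OUTPUT) pair for SFT. All function and variable names are our own illustration, not taken from the paper's actual pipeline.

```python
# Illustrative only: instantiating the EQ+SR rewrite prompt quoted above, and
# packaging one rewritten dialogue as an (INSTRUCTION, OUTPUT) pair for SFT.
# Names and structure are a sketch, not the paper's code.
EQ_SR_TEMPLATE = (
    "You will be given a dialogue between a patient and a doctor. "
    "Please rewrite the patient's question ensuring that it retains the original "
    "information while expressing a sense of {emotion}. At the same time, rewrite "
    "the doctor's response to retain the original information while soothing the "
    "patient's {emotion}."
)

def build_rewrite_prompt(dialogue: str, emotion: str) -> str:
    """Compose the rewrite instruction plus the raw dialogue for the rewriting LLM."""
    return EQ_SR_TEMPLATE.format(emotion=emotion) + "\n\nDialogue:\n" + dialogue

def to_sft_pair(emotional_question: str, soothing_response: str) -> dict:
    """Package one rewritten dialogue as an (INSTRUCTION, OUTPUT) training pair."""
    return {"instruction": emotional_question, "output": soothing_response}

prompt = build_rewrite_prompt("Patient: ...\nDoctor: ...", "anxiety")
pair = to_sft_pair("I'm so worried about this ear pain...",
                   "It's understandable to feel anxious...")
```

The rewriting LLM would be called on `prompt`, and its output split back into the question and response halves before forming the pair.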
Considering each prompt $X_i = [x_{i,1}, x_{i,2}, \ldots]$ as well as its corresponding response $Y_i = [y_{i,1}, y_{i,2}, \ldots]$ from the healthcare dialogue dataset, the loss function of the SFT stage can be defined as follows: $$ L_{SFT}(\theta) = - \sum_{i=1}^{N} \sum_{t=1}^{T_i} \log \bigl[ P(y_{i,t} \mid X_i, y_{i,1 \dots t-1}, \theta) \bigr], $$ where $N$ denotes the total number of training instances, $T_i$ denotes the length of the $i$-th response, and $\theta$ denotes the model parameters.

# 3.3 Direct Preference Optimization

Based on the previously validated training methods for LLMs (Ouyang et al., 2022), fine-tuning large language models using human preferences significantly improves their behavior on a wide range of tasks and shows promising generalization. One prominent approach is Reinforcement Learning with Human Feedback (RLHF), which employs reward models learned from response rankings to optimize the training of LLMs. However, RLHF is complex and prone to instability, requiring extensive hyperparameter optimization. To enhance stability, we utilized Direct Preference Optimization (DPO) to align the outputs of the SFT model with human preferences. Compared to RLHF, DPO offers a simpler and more efficient approach, as it eliminates the need for explicit reward modeling or reinforcement learning. To convert the dataset we constructed in Section 3.1 into the format required for DPO, we treated the modified soothing responses as the preferred responses and the original doctor responses as the rejected responses. Each training sample is a triplet consisting of a prompt, a preferred response, and a rejected response. For the $i$-th prompt $X_i$, our objective was to compute the log probabilities of the preferred response $Y_{i,1}$ and the rejected response $Y_{i,2}$ generated by the current model.
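As a toy numerical illustration of these quantities (log-probabilities of the preferred and rejected responses under the current policy and the frozen reference model), the DPO loss can be sketched in a few lines. The numbers and helper names below are invented for illustration and do not come from the paper's implementation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Sketch of the DPO objective for one triplet (prompt, preferred, rejected),
# given sequence log-probs under the current policy and the frozen reference.
# The log-prob values passed in below are toy numbers, not from a real model.
def dpo_loss(logp_pref, ref_logp_pref, logp_rej, ref_logp_rej, beta=0.1):
    margin = beta * (logp_pref - ref_logp_pref) - beta * (logp_rej - ref_logp_rej)
    return -math.log(sigmoid(margin))

# If the policy already favors the preferred response relative to the
# reference, the margin is positive and the loss falls below log(2).
loss = dpo_loss(logp_pref=-10.0, ref_logp_pref=-12.0,
                logp_rej=-11.0, ref_logp_rej=-9.0, beta=0.1)
```

Training decreases this loss by pushing the policy's log-probability up on preferred responses and down on rejected ones, relative to the reference.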
Subsequently, we fine-tuned the model parameters to increase the likelihood of the preferred responses $Y_{i,1}$ while reducing the likelihood of the rejected responses $Y_{i,2}$. This optimization process was guided by the loss function below:

$$ L_{DPO}(\theta) = - \sum_{i} \log \sigma \Bigl[ \beta \log \frac{P(Y_{i,1} \mid X_i, \theta)}{P(Y_{i,1} \mid X_i, \theta^0)} - \beta \log \frac{P(Y_{i,2} \mid X_i, \theta)}{P(Y_{i,2} \mid X_i, \theta^0)} \Bigr], $$

where $\sigma$ denotes the sigmoid function, $\theta^0$ denotes the initial parameters, and $\beta$ serves as a hyperparameter that regulates the relative weighting of the two terms.

# 3.4 Kahneman-Tversky Optimization

Another preference optimization method, Kahneman-Tversky Optimization (KTO), is a cost-effective way to align large language models with human feedback, enhancing performance without relying on preference pairs. To convert the dataset we constructed in Section 3.1 into the format required for KTO, we treated the modified soothing responses as the preferred responses and the original doctor responses as the rejected responses. In contrast to DPO, KTO does not need training data containing both preferred and rejected responses simultaneously. Each training instance consists of a prompt, a preferred or rejected response, and a binary label indicating whether the response is preferred or rejected. This optimization process was guided by the loss function below:

$$ r_{\theta}(x, y) = \log \frac{\pi_{\theta}(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}, $$

$$ z_0 = \mathrm{KL}\bigl(\pi_{\theta}(y' \mid x) \,\big\|\, \pi_{\mathrm{ref}}(y' \mid x)\bigr), $$

$$ \nu(x, y) = \begin{cases} \lambda_D \, \sigma\bigl(\beta(r_{\theta}(x, y) - z_0)\bigr), & \text{if } y \text{ is a preferred response for } x, \\ \lambda_U \, \sigma\bigl(\beta(z_0 - r_{\theta}(x, y))\bigr), & \text{if } y \text{ is a rejected response for } x, \end{cases} $$

$$ L_{\mathrm{KTO}}(\pi_{\theta}, \pi_{\mathrm{ref}}) = \mathbb{E}_{x, y \sim D}\bigl[\lambda_y - \nu(x, y)\bigr]. $$

# 4 Experiments

To evaluate the effectiveness of our proposed pipeline, we conducted experiments using the dataset introduced in prior work (Li et al., 2023b), which consists of real-world conversations between patients and doctors. This dataset includes a 100k training set sourced from HealthCareMagic.com and a 7k testing set from icliniq.com. We employed LLaMA3 models (Grattafiori et al., 2024) with various fine-tuning methods to assess the efficacy of our approach.

# 4.1 Setup

The training set was divided into two subsets, each rewritten with an emotion-specific focus using an LLM:

• Empathetic Response (ER): Approximately 60k entries from the training set were rewritten to transform original doctor responses into empathetic and compassionate replies. This modification was facilitated using the LLaMA3.1 model.

• Emotional Question (EQ) + Soothing Response (SR): The remaining 50k entries were adapted by rephrasing patient questions to convey specific negative emotions. The corresponding doctor responses were rewritten to address the questions while mitigating these emotions.
To create realistic scenarios, prompts representing five distinct negative emotions—fear, anxiety, embarrassment, frustration, and distrust—were used to guide the rewrites, leveraging the GPT-4o mini model (OpenAI et al., 2024). For our experiments, we selected LLaMA-3.2 as the base model, a multilingual LLM optimized for dialogue in multilingual contexts. Specifically, we used its instruction-tuned generative variant with 1B parameters for fine-tuning. The base models (Zheng et al., 2024) were fine-tuned for one epoch on our emotion-enhanced dataset, with hyperparameters largely aligned with those used for the original LLaMA-3.2 model. The training input consisted of task instructions and the patient’s medical inquiry, with the objective of maximizing the likelihood of generating the correct medical response. This process was carried out on a V100 GPU with 32GB of memory. To evaluate the fine-tuned models, we measured accuracy on a test set adapted using the same methodology as the EQ+SR subset of the training set. This ensured consistency in assessing the model’s ability to address queries expressing negative emotions and provide corresponding alleviating responses.

# 4.2 Evaluation

To assess whether the fine-tuned model could balance knowledge delivery and emotional support, we employed task-specific instructions and two large language models as evaluators: Qwen2.5-7B-Instruct (Yang et al., 2024b), which excels across diverse NLP benchmarks, and Emollama-chat-7b (Liu et al., 2024), which specializes in emotion recognition tasks. Additionally, we used ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002) scores to measure the n-gram similarity between generated responses and original doctor responses.

# 4.3 Results

We present the results of our evaluations below. Baseline comparisons included the original LLaMA-3.2 model and a prompt-based approach for generating emotional responses.
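As background for the n-gram overlap metrics mentioned in Section 4.2, a simplified unigram-precision sketch of BLEU-1 (omitting the brevity penalty) is shown below; the paper's evaluation presumably uses standard BLEU/ROUGE implementations, and this helper is our own illustration.

```python
from collections import Counter

# Simplified unigram precision (the core of BLEU-1, no brevity penalty),
# just to make the n-gram-overlap idea concrete. Our own sketch, not the
# evaluation code used in the paper.
def unigram_precision(candidate, reference):
    cand = candidate.split()
    ref_counts = Counter(reference.split())
    # Clip each candidate token's count by its count in the reference.
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    return clipped / len(cand)

p = unigram_precision("rest and drink fluids",
                      "please rest and drink warm fluids")
# All four candidate tokens appear in the reference, so precision is 1.0.
```

Full BLEU averages clipped precisions over several n-gram orders and applies a brevity penalty; ROUGE instead measures recall against the reference.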
The model’s performance in mitigating negative emotions and its ability to deliver medical knowledge are discussed in Sections 4.3.1 and 4.3.2, respectively.

# 4.3.1 Emotion Score

Table 1 presents the results of an evaluation where EmoLLaMA assigned numerical scores to the emotional intensity of responses. Higher values indicate stronger emotional content. Metrics were calculated for three key emotions—empathetic, comforting, and reassuring—as well as their average and maximum values. Our fine-tuned models consistently outperformed the original model and the prompt-based approach across all metrics. Among the methods tested, fine-tuning with DPO demonstrated the most significant improvements. DPO not only increased the likelihood of generating emotionally rich responses but also minimized the probability of producing emotionally deficient ones. Direct fine-tuning using the EQ+SR context proved particularly effective, achieving superior results with a smaller dataset. Specifically, fine-tuning with EQ+SR data using DPO improved the average and maximum metrics by 0.03 and 0.13, respectively, compared to the prompt-based approach and the base model. These results confirm that our revised dataset and fine-tuning process significantly enhance the emotional soothing capabilities of the dialogue system.

Table 1: Emotional intensity on the test set with Emollama as the evaluator. Bold: the highest score; underlined: second highest.

Table 2: BLEU and ROUGE scores on the test set. Bold: the highest score.

# 4.3.2 Knowledge Score

To ensure the model retained essential medical knowledge, we compared its generated responses against the original doctor responses and the emotionally modified responses using ROUGE and BLEU scores (Table 2). The fine-tuned model consistently outperformed both the original base model and the prompt-based approach across all evaluation metrics.
Notably, KTO and SFT approaches achieved better performance than DPO. This may be attributed to the fact that paired responses in DPO’s training data already contain substantial knowledge, limiting its ability to enhance further. In contrast, SFT’s focus on a single correct response allows it to better capture and internalize the required knowledge. Fine-tuning with ER+EQ+SR data using SFT and KTO yielded a 27-point improvement in BLEU-1 scores compared to both the prompt-based approach and the base model when evaluated against the modified responses. Similar trends were observed for comparisons against original doctor responses, with a 15-point improvement. These results demonstrate that our approach effectively integrates emotional support with the accurate medical knowledge necessary to address patient inquiries.

Figure 2: Preference selection based on the knowledgeable and emotional dimensions of Qwen’s responses.

# 4.4 Ablation Study

To compare the quality of responses from different methods, we presented the various responses to the Qwen2.5 model simultaneously, allowing it to select the most knowledgeable or empathetic response. In the left part of Figure 2, we plotted the preference selections of the Qwen model across different methods based on the richness of knowledge in the responses. In the right part of Figure 2, we visualized the preference selections of the Qwen model across different methods based on the level of reassurance provided in the responses. In these two charts, we compared the impact of using two different datasets, specifically examining the effect of incorporating ER data for pre-fine-tuning. Our finding indicates that regardless of the training method used—SFT, DPO, or KTO—models pre-fine-tuned with the ER dataset consistently demonstrated greater preference in both knowledge and emotional selection criteria.
This is particularly evident in the emotional selection, as the ER dataset is specifically designed to enable the model to provide empathetic responses even when addressing standard informational content.

# 4.5 Qualitative Analysis

In Table 3, there are some examples of emotional questions and soothing responses generated by our fine-tuned models. Based on the analysis of the models’ responses in case (a), it is evident that all three approaches initially focus on alleviating patient anxiety and demonstrating empathy, followed subsequently by the provision of medical knowledge and recommendations. As discussed in the previous section, the DPO approach is particularly effective in fostering the ability to provide emotional reassurance and, therefore, tends to emphasize empathetic expression in its responses. However, this heightened focus on emotional support can occasionally lead to a diminished emphasis on knowledge transmission, as exemplified by the response to case (b). Conversely, the SFT and KTO approaches facilitate more robust knowledge acquisition, resulting in improved informational clarity, while still maintaining an appropriate balance of empathetic language.
With the advancement of large language models, many dialogue systems are now capable of providing reasonable and informative responses to patients' medical conditions. However, when patients consult their doctor, they may experience negative emotions due to the severity and urgency of their situation. If the model can provide appropriate comfort and empathy based on the patient's negative emotions while answering medical questions, it will likely offer a more reassuring experience during the medical consultation process. To address this issue, our paper explores the balance between knowledge sharing and emotional support in the healthcare dialogue process. We utilize a large language model to rewrite a real-world interactive medical dialogue dataset, generating patient queries with negative emotions and corresponding medical responses aimed at soothing the patient's emotions while addressing their concerns. The modified data serves to refine the latest large language models with various fine-tuning methods, enabling them to accurately provide sentences with both emotional reassurance and constructive suggestions in response to patients' questions. Compared to the original LLM model, our experimental results demonstrate that our methodology significantly enhances the model's ability to generate emotional responses while maintaining its original capability to provide accurate knowledge-based answers.
# I. INTRODUCTION

Nowadays, the application domains of Unmanned Aerial Vehicles (UAVs) have rapidly expanded, penetrating extensively into critical fields such as military, civilian, and commercial sectors [1]. Leveraging their flexibility and adaptability, UAVs have demonstrated significant advantages in diverse tasks, including surveillance and monitoring, facility inspection, logistics delivery, and recreational applications. Among these, achieving safe and autonomous landing in complex and unknown environments stands out as one of the most challenging key technologies for UAVs, holding substantial application value in emergency response scenarios and situations with limited human intervention. Traditional landing methods primarily rely on predefined markers or specific landing points; however, these prerequisites are often difficult to satisfy in practical application scenarios. Therefore, the development of advanced algorithms and technologies to enable UAVs to autonomously identify safe landing zones has become a crucial research topic [2]. The core challenge for UAVs to achieve safe landing in unknown regions lies in accurately identifying suitable safe landing areas [3]. Existing methods fall into two categories: multi-sensor fusion and pure vision-based algorithms. Multi-sensor approaches integrate data from RGB-D cameras, LiDAR, and IMUs for high-precision terrain reconstruction, but suffer from high hardware costs. Pure vision-based methods, while cost-effective, struggle with scale ambiguity and geometric reliability [2].

Fig. 1. Top: Comparative performance of our method versus baseline approaches on both In-domain and Cross-domain test benchmarks for the SLZ estimation task. The proposed method demonstrates enhanced robustness and superior generalization capabilities across all key metrics. Bottom: Zero-shot testing results on real-world flight data.
The green mask indicates pixels predicted as safe landing zones, blue bounding boxes highlight the top-5 landing candidate regions based on area calculation, and yellow annotations display the estimated areas of candidate zones. These results validate the practical applicability of our method in real-world scenarios.

Fortunately, monocular 3D perception technology, represented by Metric3D V2, has recently achieved breakthrough progress. This technology, through deep learning methods, realizes joint optimization of depth prediction and normal estimation, achieving state-of-the-art (SOTA) performance across multiple benchmarks. Its exceptional zero-shot generalization capability establishes it as a foundational model in the field of monocular 3D perception [4]. Depth information and surface normal information provide UAVs with critical cues regarding landing zone distance and terrain characteristics, offering new research directions for SLZ estimation based on pure vision algorithms. Building upon the Metric3D V2 framework, we propose an end-to-end UAV SLZ estimation framework—VisLanding. Specifically, we introduce a new safe zone segmentation branch to the existing Metric3D V2 framework, leveraging its depth prediction and normal estimation capabilities as auxiliary features. The safe landing zone estimation task is transformed into a binary semantic segmentation problem, enabling the delineation of safe zones within monocular images. We first fine-tuned the model using the WildUAV dataset [5] to enhance its monocular depth estimation capability from a UAV perspective. As one of the few high-quality real-world datasets for UAV monocular depth estimation tasks, WildUAV effectively mitigates domain discrepancy issues associated with synthetic data. Furthermore, we manually annotated safe landing zone labels for the mapping set of the WildUAV dataset with depth annotations to train the model’s safe zone estimation capability.
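As a hedged sketch of the kind of post-processing implied by Fig. 1 (ranking candidate safe regions by area from a predicted binary mask), the fragment below groups safe pixels into 4-connected regions and returns the largest candidates. The implementation is our own illustration, not the paper's code.

```python
from collections import deque

# Illustrative post-processing: given a binary safe/unsafe mask, group safe
# pixels into 4-connected regions with BFS and rank candidates by pixel area,
# mirroring the "top-5 candidate regions based on area calculation" in Fig. 1.
def top_candidate_regions(mask, k=5):
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                area, q = 0, deque([(i, j)])
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                areas.append(area)
    return sorted(areas, reverse=True)[:k]

mask = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 1, 1]]
# Two 4-connected safe regions, each of area 3.
```

In a real system the pixel areas would be converted to metric areas using the predicted depth and camera intrinsics before ranking.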
In addition, we constructed an evaluation dataset using the Semantic Drone Dataset [7] to test the model’s cross-domain generalization and robustness. The main contributions of this work are summarized as follows:

• We innovatively employ the Metric3D V2 model to replace complex multi-sensor systems for acquiring critical 3D information required for SLZ identification. This approach not only fully leverages Metric3D V2’s strengths in depth prediction and surface normal estimation but also significantly reduces system complexity and hardware costs.

• We constructed a training dataset and a cross-domain evaluation dataset based on real-world scenarios, providing essential data support for research on SLZ estimation using monocular vision algorithms.

• As demonstrated in Fig. 1, the proposed method demonstrates excellent performance with strong generalization and robustness, outperforming existing approaches. It combines post-processing to achieve direct estimation of the safe landing zone area, better aligning with practical application requirements, and providing an effective solution for safe landing of UAVs.

# II. RELATED WORK

# A. Monocular 3D Perception

Monocular 3D perception, as a fundamental task in computer vision, faces significant challenges in metric-scale reconstruction due to the inherent lack of depth information and scale ambiguity in single-view images. Recent research has demonstrated substantial application value in practical scenarios such as autonomous driving through cross-domain generalization strategies and unsupervised learning frameworks. ZoeDepth [6] innovatively combines pretrained relative depth models with metric-scale fine-tuning strategies, establishing a new paradigm for scale-sensitive depth estimation. UniDepth [8] introduces a self-promptable camera module that decouples camera parameters from depth features through pseudo-spherical representation, enabling cross-domain metric reconstruction from single views.
ZeroDepth [9] builds upon the Perceiver IO architecture to construct a unified multi-domain framework, effectively addressing the challenge of absolute depth prediction in both indoor and outdoor scenes. Depth Anything V2 [10] leverages large-scale unlabeled data to establish a data-driven paradigm, significantly reducing generalization errors. Diffusion-based approaches, such as Marigold [11] and DiffusionDepth [12], enhance detail reconstruction accuracy through probabilistic modeling mechanisms. Metric3D V2 [4] proposes the concept of a canonical camera space, effectively resolving scale ambiguity by unifying camera parameter representations. This approach achieves joint depth-normal estimation under zero-shot conditions and attains state-of-the-art performance on multiple benchmarks, establishing itself as a foundational model and providing strong support for metric 3D reconstruction in open-world scenarios.
# B. Safe Landing Zones Estimation
With the increasing demand for autonomous operations of UAVs in both civilian and military fields, the estimation of SLZ has garnered significant attention from both academia and industry. To address the technical challenges of identifying suitable landing areas in non-cooperative or dynamic environments, researchers have proposed various solutions from multiple dimensions. In recent years, thanks to the rapid development of deep learning technologies, computer vision has become one of the key technologies for achieving autonomous UAV landing. Early studies [13], [14], [15], [16] have explored the use of deep neural networks for SLZ estimation. Building on this, SafeUAV [17] innovatively combined depth estimation with SLZ estimation, proposing a specialized synthetic dataset for task training. This system can achieve depth estimation from RGB images and classify terrains into categories such as horizontal and vertical to distinguish SLZ, demonstrating good performance while ensuring real-time capability. Chen et al.
(2022) [18] developed an autonomous landing system based on a binocular LiDAR sensor system, which integrates a terrain understanding model to achieve simultaneous depth completion and semantic segmentation. This method enables UAVs to accurately infer the morphological features and semantic information of the terrain, thereby achieving high-precision SLZ estimation in complex environments, albeit with relatively high system complexity and hardware costs. Abdollahzadeh et al. (2022) [19] proposed a depth regression model based on a semantic segmentation framework, which can generate continuous safety score maps, providing a more refined landing safety assessment than traditional binary classification methods. Additionally, Serrano and Bandala (2023) [20] innovatively applied the YOLO (You Only Look Once) [21] series of object detection algorithms to safe landing zone estimation. In the latest research, Loera-Ponce et al. (2024) [22] employed the advanced vision transformer network SegFormer [23] to perform semantic segmentation on images captured by UAVs. By mapping segmentation categories to different risk levels for risk assessment, they provided an effective solution for SLZ estimation in emergency situations.
# III. METHOD
# A. Preliminaries
1) SLZ estimation via segmentation: Recent segmentation-based studies show a growing trend toward multi-level risk assessment frameworks, which divide landing zones into $K$ discrete risk levels $(K \geq 3)$. However, such fine-grained grading strategies can increase the decision-making complexity of the model. Therefore, we argue that it is sufficient to distinguish between safe and unsafe regions, while imposing stricter constraints on the conditions for determining safety to reduce the risk of model misjudgment and avoid potential losses.
Based on this, we formulate SLZ estimation as a binary segmentation problem: given an RGB image $I \in \mathbb{R}^{H \times W \times 3}$ captured by a drone, a deep neural network $f_{\theta} : I \mapsto M$ maps it to a binary segmentation mask $M \in \{0, 1\}^{H \times W}$, where:
$$
m_{ij} = \begin{cases} 0 & \text{(safe pixel)} \\ 1 & \text{(unsafe pixel)} \end{cases}
$$
Fig. 2. Dual-flow optimization framework. (Left) Main pipeline: Input images are transformed into canonical space via camera parameters $\mathbf{K}$, processed by DINOv2 [25] ViT [26] for multi-scale feature extraction $(F_{1/4}, F_{1/7}, F_{1/14})$, then refined through $T$ iterations in the Dense Prediction Transformer (DPT [27]). (Right) Dual optimization flows: The depth-normal flow (green) employs three ConvGRU blocks with projection heads for multi-scale refinement $\{H_t^{1/4}, H_t^{1/7}, H_t^{1/14}\}$, while the SLZ flow (orange) uses a single ConvGRU with geometric fusion of the updated depth/normal. Final predictions are obtained through upsampling and inverse canonical transformation.
2) Metric3D V2 model: The Metric3D V2 model achieves monocular geometry perception through two key techniques: first, it resolves the scale ambiguity of images by employing a canonical camera space transformation; second, it introduces a novel depth-normal joint optimization mechanism. The specific pipeline of the model is as follows: First, the input image $I_R$ and depth label $D_R^{\mathrm{gt}}$ undergo a projection transformation with canonical focal length $f_C$ to generate the canonical space image $I_C$ and its corresponding depth label $D_C^{\mathrm{gt}}$.
Subsequently, the encoder extracts multi-scale features from the image, and the decoder predicts the initial canonical depth $D^0$ and unnormalized normal $N^0$. Finally, these predictions are fed into the DPT [27] module for $T$-step iterative joint optimization, which outputs the final predicted canonical depth $D_C$ and predicted normal $N$. During training, the model employs canonical-space depth labels to supervise depth prediction, while using normal labels to supervise the normal branch. Considering the scarcity of normal label data, Metric3D V2 innovatively utilizes the predicted real depth $D_R$ to impose depth-normal consistency constraints on normal prediction, thereby providing supplementary supervision for the normal branch. For this task, a qualified SLZ must satisfy three basic conditions: flat terrain, absence of obstacles, and sufficient landing area [2]. The depth and normal information predicted by the Metric3D V2 model is closely related to these factors, providing crucial 3D prior information for SLZ estimation and further enabling the estimation of landing zone area. However, existing research still underutilizes such 3D information.
# B. SLZ Estimation via Depth-Normal Joint Optimization
Autonomous UAV landing is a highly challenging task that requires the system to accurately understand the geometric characteristics and safety of the environment. Previous deep learning-based approaches often fail to fully utilize 3D information, tend to produce physically implausible predictions, and typically suffer from overfitting to specific scene data. 1) Pipeline: Building upon Metric3D V2, we propose a novel end-to-end multi-task collaborative optimization framework that achieves comprehensive understanding of SLZ through joint learning of depth estimation, normal prediction, and safe landing zone estimation.
The overall pipeline largely follows the design of Metric3D V2 (as shown in the left part of Fig. 2). For an image $I \in \mathbb{R}^{H \times W \times 3}$ captured by the UAV, we first transform it to the canonical camera space defined by Metric3D V2 based on the camera parameters. In the canonical camera space, we employ the DINOv2 Vision Transformer to extract multi-scale image features, followed by a decoder that generates the initial depth-normal prediction flow and SLZ prediction flow in low-resolution space. These flows are then iteratively optimized $T$ times through the DPT module. Finally, we obtain the predicted depth, normal, and SLZ in real space through upsampling and the inverse canonical space transformation (Note: only the depth prediction requires the canonical/real space transformation). 2) SLZ Estimation Optimization: In the Metric3D V2 architecture, the iterative optimization process of DPT includes only a single depth-normal data flow. To fully utilize its pre-trained capabilities while maintaining the original structure, we propose a dual-flow optimization framework, shown in the right part of Fig. 2, which adds an SLZ prediction branch. The core of the DPT module is a recurrent update module consisting of ConvGRUs and a projection head: the ConvGRUs enhance spatial feature processing by introducing convolutional operations, while the projection head (a lightweight CNN) predicts the update quantities of the various parameters. In the initial optimization stage, DPT receives multi-scale hidden features (containing rich semantic information) output by the decoder. The ConvGRUs update the hidden feature $H$ by integrating all input variables; after the update quantities are predicted by the projection head, the updated hidden features and prediction information are passed to the next iteration step until $T$ optimizations are completed.
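The recurrent update described above can be sketched in a few lines of PyTorch. This is a minimal illustration of the ConvGRU-plus-projection-head pattern with residual prediction updates, not the authors' implementation; the module names, channel sizes, and gating layout are assumptions:

```python
# Sketch of one DPT-style data flow: a ConvGRU cell refines a hidden
# state from the current predictions, and a lightweight projection head
# predicts residual updates. Channel sizes are illustrative.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, hidden_ch, input_ch):
        super().__init__()
        ch = hidden_ch + input_ch
        self.conv_z = nn.Conv2d(ch, hidden_ch, 3, padding=1)  # update gate
        self.conv_r = nn.Conv2d(ch, hidden_ch, 3, padding=1)  # reset gate
        self.conv_q = nn.Conv2d(ch, hidden_ch, 3, padding=1)  # candidate state

    def forward(self, h, x):
        hx = torch.cat([h, x], dim=1)
        z = torch.sigmoid(self.conv_z(hx))
        r = torch.sigmoid(self.conv_r(hx))
        q = torch.tanh(self.conv_q(torch.cat([r * h, x], dim=1)))
        return (1 - z) * h + z * q

class RecurrentRefiner(nn.Module):
    """One data flow: ConvGRU plus projection head with residual updates."""
    def __init__(self, hidden_ch=64, pred_ch=1):
        super().__init__()
        self.gru = ConvGRUCell(hidden_ch, pred_ch)
        self.proj = nn.Sequential(  # lightweight CNN projection head
            nn.Conv2d(hidden_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, pred_ch, 3, padding=1))

    def forward(self, h, pred, steps=4):
        for _ in range(steps):           # T optimization iterations
            h = self.gru(h, pred)
            pred = pred + self.proj(h)   # residual update of the prediction
        return h, pred
```

In this sketch, stacking one refiner per scale with separate depth and normal projection heads would mirror the depth-normal flow, while a single refiner on the 1/4-scale features mirrors the SLZ flow.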
The two data flows adopt differentiated designs: Depth-Normal Flow (green part): Contains 3 ConvGRU sub-blocks and 2 projection heads, processing multi-scale feature maps $\{H_t^{1/4}, H_t^{1/7}, H_t^{1/14}\}$, with depth and normal information as input variables. Its optimization process can be described as follows:
$$
\begin{array}{rl}
& \{H_{t+1}^{1/4}, \cdots\} = \mathrm{ConvGRUs}(D_t \otimes N_t, \{H_t^{1/4}, \cdots\}), \\
& \Delta D_{t+1} = \mathrm{Proj_d}(H_{t+1}^{1/4}), \quad D_{t+1} = \Delta D_{t+1} + D_t, \\
& \Delta N_{t+1} = \mathrm{Proj_n}(H_{t+1}^{1/4}), \quad N_{t+1} = \Delta N_{t+1} + N_t
\end{array}
$$
Safe Landing Zone Flow (orange part): Adopts a single ConvGRU sub-block and projection head, processing only the high-resolution feature map $H_t^{1/4}$ to control computational costs. During iteration, it integrates the SLZ, updated depth, and normal information to collaboratively optimize safe zone prediction through 3D geometric features.
The optimization process can be described as follows:
$$
\begin{array}{rl}
& H_{t+1}^{1/4} = \mathrm{ConvGRU}(D_{t+1} \otimes N_{t+1} \otimes SLZ_t, H_t^{1/4}), \\
& \Delta SLZ_{t+1} = \mathrm{Proj_{slz}}(H_{t+1}^{1/4}), \\
& SLZ_{t+1} = \Delta SLZ_{t+1} + SLZ_t
\end{array}
$$
3) Training: Since the training data of Metric3D V2 mainly consists of ground-level and indoor scenes, which differ significantly in scale from UAV perspectives, we adopt a two-stage training strategy: first fine-tuning the base model on UAV aerial data, then freezing the backbone network parameters and training the SLZ prediction branch separately. Following the fine-tuning protocol of Metric3D V2, a combined loss function is used in the fine-tuning stage:
$$
L_{\text{fine-tune}} = \lambda_1 \cdot L_{\mathrm{vnl}} + \lambda_2 \cdot \sum_{t=0}^{T} \gamma^{T-t} (L_{\mathrm{L1}}^{t} + L_{\mathrm{conf}}^{t}) + \lambda_3 \cdot L_{\mathrm{dncl}}
$$
where: The balancing coefficients $\lambda_1$, $\lambda_2$, and $\lambda_3$ control the contribution of each loss term and are set to 0.2, 0.5, and 0.01, respectively. The temporal decay factor $\gamma$, which controls the weight decay rate of historical predictions, is set to 0.9. Here, $t \in [0, T]$ denotes the $t$-th prediction stage, where the prediction at $t = 0$ refers to the initial prediction provided by the Mono Decoder.
• Virtual Normal Loss $(L_{\mathrm{vnl}})$ randomly samples $N$ sets of three points in the image plane to form virtual planes, evaluating the difference between the virtual normals computed from the predicted depth and those from the ground truth:
$$
L_{\mathrm{vnl}} = \frac{1}{N} \sum_{i=1}^{N} \| \hat{n}_{\mathrm{pred}}^{(i)} - \hat{n}_{\mathrm{gt}}^{(i)} \|
$$
Here, $\hat{n}_{\mathrm{pred}}$ and $\hat{n}_{\mathrm{gt}}$ denote the unit normal vectors computed from the predicted depth $\hat{D}_{\mathrm{pred}}$ and the ground truth depth $D_{\mathrm{gt}}$, respectively.
• Sequential Depth Loss is a temporally optimized term specifically designed for multi-stage prediction networks, encompassing a pixel-wise L1 depth regression loss and a confidence calibration loss:
$$
\begin{array}{l}
L_{\mathrm{L1}}^{t} = \frac{1}{N} \sum_{i=1}^{N} | \hat{D}_i^{t} - D_i^{\mathrm{gt}} |, \\
L_{\mathrm{conf}}^{t} = \frac{1}{M} \sum_{i=1}^{M} | \hat{C}_i^{t} - C_i^{\mathrm{gt}} |
\end{array}
$$
Here, $D_i^{\mathrm{gt}}$ denotes the depth label transformed into the canonical camera space, and $C_i^{\mathrm{gt}}$ represents the corresponding confidence label.
• Depth-Normal Consistency Loss $(L_{\mathrm{dncl}})$ leverages the camera intrinsic parameters $K$ to establish a differentiable transformation between the depth map and the normal map, enforcing geometric consistency:
$$
L_{\mathrm{dncl}} = \frac{1}{M} \sum_{i=1}^{M} (1 - \hat{N}_i \cdot N_i)
$$
Here, $\hat{N}_i$ represents the normal vector converted from the predicted depth, and $N_i$ denotes the output of the normal estimation branch. The dot product measures the directional consistency between the two. Inspired by the Sequential Depth Loss, we adopt a weighted cross-entropy loss to optimize the SLZ estimation branch, defined as follows:
$$
L_{\mathrm{safe}} = \sum_{t=0}^{T} \gamma^{T-t} \left[ - \frac{1}{N} \sum_{j=1}^{N} \sum_{c=1}^{C} Y_{j,c} \log(P_{t,j,c}) \right]
$$
Here, the definitions of $\gamma$, $T$, and $t$ remain consistent with the previous sections, $Y_{j,c}$ represents the ground truth label for the $j$-th pixel and class $c$, and $P_{t,j,c}$ denotes the predicted probability of the $j$-th pixel belonging to class $c$ at time step $t$. Additionally, class-specific weights are incorporated into the loss computation to address class imbalance, with a weight of 2 assigned to the safe class and a weight of 1 assigned to the unsafe class.
# IV. EXPERIMENTS
# A. Datasets
1) WildUAV: The WildUAV dataset is a real-world image dataset designed for UAV environmental perception tasks, primarily used for validating depth estimation and semantic segmentation algorithms in complex scenarios. The data was captured using a DJI Matrice 210 RTK V2 UAV equipped with a Zenmuse X5s camera system, with image resolutions of $5280 \times 3956$ pixels and $3840 \times 2160$ pixels.
The UAV operated at altitudes ranging from 20 to 30 meters, covering diverse scenes such as fields and forests, and provides precise absolute depth annotations. Because the original dataset lacks safety zone annotations, we manually annotated 1329 images from the mapping subset, following the principle of "flat and obstacle-free", using the ISAT with Segment Anything annotation tool [24], and divided them into 1065 training samples and 264 validation/test samples. To meet experimental requirements, all images were resized to 1/4 of their original resolution for training and evaluation. Example data are shown in Fig. 3(a). 2) Semantic Drone: The Semantic Drone dataset focuses on semantic segmentation tasks in urban scenes, aiming to enhance the safety of UAV autonomous flight and landing. The dataset contains high-resolution images of $6000 \times 4000$ pixels, captured at altitudes ranging from 5 to 30 meters, with annotations covering over 20 semantic categories such as trees, grass, roads, vehicles, and buildings. To address the lack of safety zone annotations, we followed the criteria in [22], defining grass, predefined markers, and paved areas within "ideal landing zones" and "low-risk zones" as safety zones, while categorizing all other classes as non-safety zones. Additionally, morphological dilation was applied to the boundaries of non-safety zones to simulate high-risk buffer areas. All 400 publicly available images were used as the cross-domain test set, resized to 1/4 of their original resolution for testing. Example data are shown in Fig. 3(b).
# B. Implementation Details
To balance the algorithm's accuracy and real-time requirements, this study builds upon the small version of the Metric3D V2 pretrained model (800k steps of pretraining).
Following Metric3D V2's fine-tuning protocol, the initial learning rate for the encoder is set to $1 \times 10^{-5}$, while the initial learning rate for the decoder is set to $1 \times 10^{-4}$. The Adam optimizer is employed, with the DPT optimization module set to iterate for $T = 4$ steps. In the data augmentation module, the RandomResize ratio range is set to [0.5, 0.99], the batch size is set to 4, and training runs for a total of $20\mathrm{k}$ steps. To fully utilize the unlabeled data from WildUAV, a semi-supervised training framework is introduced starting from the $4\mathrm{k}$-th step, updating the teacher model parameters with an Exponential Moving Average (EMA) strategy. In the second phase, all parameters of the original Metric3D model are frozen, and the safe landing zone prediction branch (SLZ flow) is trained separately with an initial learning rate of $1 \times 10^{-4}$, a batch size of 4, and a training duration of $5\mathrm{k}$ steps. The experiments are implemented with the PyTorch and MMSegmentation frameworks on an NVIDIA RTX 4080 GPU. Fig. 3. Sample examples from the WildUAV and Semantic Drone datasets. The WildUAV dataset provides annotations for both depth estimation and safe landing zones (highlighted by green masks), while the Semantic Drone dataset includes annotations exclusively for safe landing zones (indicated by green masks).
# C. Evaluation and Comparison
In this study, centered on the practical requirements of SLZ segmentation, we designed a dual-dimension evaluation protocol encompassing in-domain and cross-domain assessments. While ensuring fundamental segmentation accuracy, it primarily validates the model's domain generalization capability and robustness.
The experiments covered four baseline models, from the SafeUAV series (big/big-pretrain) and the SegFormer series (mit-b0/mit-b1), with the following evaluation metrics:
• aAcc (Average Accuracy): The overall classification accuracy computed over all pixels.
• mIoU (Mean Intersection over Union): The average IoU across all classes, measuring the overlap between predicted and ground truth regions.
• mAcc (Mean Accuracy): The average per-class pixel accuracy.
• mDice (Mean Dice Coefficient): The average Dice coefficient across all classes, evaluating the overlap between predicted and ground truth regions.
• mFscore (Mean F1 Score): The average F1 score across all classes, balancing Precision and Recall.
• mPrecision (Mean Precision): The average Precision across all classes, measuring the accuracy of positive predictions.
• mRecall (Mean Recall): The average Recall across all classes, measuring the completeness of positive predictions.
Fig. 4. Experimental results of safe landing zone area estimation on real-flight images show improved accuracy when closer to targets, with both measured areas being approximately $34~\mathrm{m}^2$. Fig. 5. Qualitative prediction results of our method compared with other approaches on two test benchmarks. The proposed method demonstrates significant advantages in generalization capability and robustness, outperforming other methods in challenging scenarios.
The training settings strictly adhered to the benchmark methods: SegFormer was fine-tuned for 30 epochs from ImageNet-1k pre-trained weights [22], while the base SafeUAV models were trained on WildUAV for 50 epochs [17]. The pre-trained versions were first pre-trained on the HOV dataset [17] for 50 epochs and then transferred to WildUAV for further training. Our evaluation experiments are conducted along two dimensions: In-domain: Models are trained and tested on the WildUAV dataset.
Cross-domain: Models are trained on the WildUAV dataset but directly transferred to the Semantic Drone dataset for testing. The WildUAV dataset consists of images captured in wild environments, while the Semantic Drone dataset contains images collected in urban settings. The significant domain gap between these two datasets makes them particularly suitable for cross-domain evaluation, effectively assessing the generalization capability and robustness of the models. As shown in the left part of Table I, all models achieved usable performance levels in the in-domain tests based on the aAcc metric. Among them, SegFormer demonstrated significant superiority across all metrics due to its large-scale pre-training advantage on segmentation tasks. In contrast, our method, which did not undergo task-specific pre-training for segmentation, exhibited a measurable performance gap. However, in the cross-domain tests (right part of Table I), the benchmark models experienced notable performance degradation. Taking the key metrics aAcc and mIoU as examples, the SegFormer series declined by approximately $45\%$ and $64\%$, respectively, compared with its in-domain performance, while the SafeUAV series dropped by about $42\%$ and $60\%$. In stark contrast, our method decreased by only approximately $25\%$ and $33\%$, and significantly outperformed the benchmark models in absolute performance. This disparity is further corroborated by the qualitative comparisons in Fig. 5, where our method maintains prediction coherence under domain shifts while competitors exhibit fragmented outputs. This result indicates that traditional semantic segmentation methods suffer from clear domain-specific biases, leading to severe overfitting. In comparison, our method effectively mitigates cross-domain performance degradation by leveraging 3D information, demonstrating stronger applicability in real-world complex environments.
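The two headline metrics discussed above, aAcc and mIoU, can be computed for binary safe/unsafe masks as follows. This is a minimal NumPy sketch; the function names and inputs are illustrative and not tied to the MMSegmentation implementation:

```python
# Sketch of the two headline metrics for binary safe/unsafe masks:
# aAcc is the fraction of correctly classified pixels over all pixels,
# mIoU averages per-class intersection-over-union.
import numpy as np

def aacc(pred, gt):
    """Overall pixel accuracy over all pixels."""
    return float((pred == gt).mean())

def miou(pred, gt, num_classes=2):
    """Mean IoU: average of per-class intersection-over-union."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))
```

Because mIoU averages over classes, a model that over-predicts the majority class can still score high on aAcc while its mIoU collapses, which is why both metrics are reported.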
TABLE I MODEL PERFORMANCE COMPARISON RESULTS ON IN-DOMAIN AND CROSS-DOMAIN METRICS. BOLD AND UNDERLINE DENOTE THE BEST AND SECOND-BEST RESULTS. Furthermore, to address real-time requirements, we conducted experiments with different numbers of optimization iteration steps, as shown in the lower part of Table I. The results indicate that even with only 1 optimization step, the performance degradation remains entirely acceptable, with minimal impact on the model's generalization capability, and our method still maintains a significant advantage in the cross-domain tests. This characteristic enables our method to significantly improve inference speed in computationally constrained scenarios while maintaining high performance.
# D. Estimation of Safe Landing Zone Area
Our model also inherits the powerful zero-shot prediction capabilities of Metric3D V2 for absolute depth and normal maps. By leveraging this information along with the intrinsic matrix of the onboard camera, we can perform an end-to-end rapid rough estimation of safe zone areas, providing critical information for landing site selection. Using the camera intrinsic matrix, depth map, and normal map, pixels in the image are projected into the real-world coordinate system, and the tilt effect is corrected using normal information to accurately estimate the actual area of the selected polygonal region. Specifically, given the camera intrinsic matrix $K$ and depth map $D$, the pixel coordinates $(u, v)$ in the image can be projected to the 3D point $(X_c, Y_c, Z_c)$ in the camera coordinate system, calculated as:
$$
\left\{ \begin{array}{l}
X_c = \dfrac{(u - c_x) \cdot D(u, v)}{f_x} \\
Y_c = \dfrac{(v - c_y) \cdot D(u, v)}{f_y} \\
Z_c = D(u, v)
\end{array} \right.
$$
where $f_x$ and $f_y$ are the focal lengths of the camera, and $c_x$ and $c_y$ are the principal point coordinates. The surface tilt effect is corrected using the $z$ component of the normal vector $\mathbf{n} = (n_x, n_y, n_z)$, and the projected area of a single pixel is calculated as:
$$
A_{\mathrm{actual}}(u, v) = \frac{D(u, v)^2}{f_x \cdot f_y \cdot |n_z(u, v)|}
$$
The total actual area is obtained by summing the actual area contributions of all pixels within the selected polygonal region:
$$
A_{\mathrm{total}} = \sum_{(u, v) \in \mathrm{SLZ}} A_{\mathrm{actual}}(u, v)
$$
In tests on real flight data not used in training (as shown in Fig. 4), estimation experiments on two preset safe landing zones with areas of around $34~\mathrm{m}^2$ showed that, as the drone approached the target area, the improvement in depth estimation accuracy significantly reduced area errors. Notably, the system exhibited conservative estimation characteristics in long-distance scenarios (estimates systematically lower than actual measurements), providing safety redundancy for autonomous landing decisions.
# E. Limitations
Although our method has demonstrated good generalization performance and low hardware complexity in real-world scenarios, its performance has not been validated on larger-scale datasets. We therefore plan to collect and construct larger datasets for training and evaluation in the future. Although our in-domain metrics are lower than those of other methods, overfitting to in-domain data often reduces generalization on cross-domain tasks, a common issue in the field. Thus, we prioritize generalization performance in real-world scenarios over merely achieving good results on test sets whose distributions are identical to the training set.
Furthermore, in subsequent work, we will investigate methods to achieve strong performance in both in-domain and cross-domain settings. However, due to the large parameter size and high inference cost of the Vision Transformer model, our method cannot meet the high real-time requirements (e.g., $< 5$ FPS on Jetson Orin Nano) of practical applications such as online processing on drones. To address this, we will explore techniques such as model distillation and transfer learning to enhance the practical potential of our approach.
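As a closing illustration, the safe-zone area estimation of Section IV-D reduces to a short computation: each safe pixel contributes $D^2 / (f_x f_y |n_z|)$, summed over the mask. A minimal NumPy sketch, with illustrative function and array names:

```python
# Sketch of the Section IV-D area estimation: each safe pixel's real-world
# footprint is D^2 / (fx * fy * |n_z|), summed over the SLZ mask.
import numpy as np

def slz_area(depth, normal_z, mask, fx, fy, eps=1e-6):
    """Total real-world area (m^2) of the pixels where mask is True.

    depth    : (H, W) metric depth map D(u, v) in meters
    normal_z : (H, W) z-component of the unit surface normal
    mask     : (H, W) boolean safe-landing-zone mask
    fx, fy   : focal lengths in pixels
    """
    # eps guards against near-zero n_z on steep (near-vertical) surfaces
    per_pixel = depth ** 2 / (fx * fy * np.maximum(np.abs(normal_z), eps))
    return float(per_pixel[mask].sum())
```

For a fronto-parallel plane ($n_z = 1$) at depth $D$, this reduces to each pixel covering $(D/f_x)(D/f_y)$ square meters, matching the pinhole projection intuition.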
This paper presents VisLanding, a monocular 3D perception-based framework for safe UAV (Unmanned Aerial Vehicle) landing. Addressing the core challenge of autonomous UAV landing in complex and unknown environments, this study innovatively leverages the depth-normal synergy prediction capabilities of the Metric3D V2 model to construct an end-to-end safe landing zones (SLZ) estimation framework. By introducing a safe zone segmentation branch, we transform the landing zone estimation task into a binary semantic segmentation problem. The model is fine-tuned and annotated using the WildUAV dataset from a UAV perspective, while a cross-domain evaluation dataset is constructed to validate the model's robustness. Experimental results demonstrate that VisLanding significantly enhances the accuracy of safe zone identification through a depth-normal joint optimization mechanism, while retaining the zero-shot generalization advantages of Metric3D V2. The proposed method exhibits superior generalization and robustness in cross-domain testing compared to other approaches. Furthermore, it enables the estimation of landing zone area by integrating predicted depth and normal information, providing critical decision-making support for practical applications.
# 1. Introduction

Foundation models have revolutionized domains such as natural language (Brown, 2020; Touvron et al., 2023), vision (Wang et al., 2023; Yuan et al., 2021), tabular data (Zhang et al., 2023b; Yang et al., 2024), and graphs (Liu et al., 2023; Tang et al., 2024), offering scalable and generalized frameworks for diverse tasks. However, foundation models for RDBs, where tables are interconnected through complex relationships, remain underexplored, despite the practical importance and widespread use of RDBs. An RDB foundation model is defined as a predictive model that works across diverse RDBs with varying sizes, schemas, and domains. Integrating RDBs into the foundation model paradigm presents challenges, including structural complexity, a lack of processing pipelines, and unique computational patterns that differ from single-table data. While traditional machine learning and deep learning methods handle single-table data well (Chen & Guestrin, 2016; Ke et al., 2017; Prokhorenkova et al., 2018; Huang et al., 2020; Arik & Pfister, 2021; Gorishniy et al., 2021), they fail to address RDB-specific complexities. On the other hand, flattening relational data into a single table often results in significant information loss (Cvitkovic, 2020; Chepurko et al., 2020). Recent graph-based methods using Graph Neural Networks (GNNs) (Kanter & Veeramachaneni, 2015; Cvitkovic, 2020; Bai et al., 2021; Zhang et al., 2023a; Wang et al., 2024; Robinson et al., 2024) attempt to capture these relationships but are limited to task-specific models rather than a universal RDB foundation model. In this work, we introduce Griffin, a Graph-centric RelatIonal database FoundatIoN model, which integrates pretraining on both single-table and RDB datasets. Griffin is designed for broad generalization across diverse tasks and demonstrates superior performance over task-specific GNN approaches through several key innovations.
The first challenge is handling different task types (classification and regression) and different RDBs with disparate input feature spaces (categorical, numerical, textual, and other features with diverse semantic meanings). Griffin unifies input data encoders and task decoders. Unlike earlier methods that apply a single embedding layer to all categorical features and directly input raw numerical values, Griffin uses a pretrained text encoder for categorical inputs and a pretrained float encoder for numerical ones. It also incorporates RDB metadata, including table names, column names, and edge types, to distinguish tasks and capture the structure that connects them. In contrast to prior work that uses separate prediction heads for each task, Griffin applies a shared float decoder (pretrained jointly with the float encoder) for all regression tasks, and a unified classification head that integrates the text embeddings of the target categories. This allows Griffin to manage classification tasks with varying numbers of categories and regression tasks with different ranges using a consistent architecture. Another key design consideration is selecting an effective GNN architecture for RDBs. Griffin addresses this by incorporating a cross-attention module that flexibly gathers information from the cells within a row (treated as a node in the graph), mitigating the loss introduced by mean aggregation in standard GNNs. Additionally, Griffin enhances its message passing neural network (MPNN) by performing intra-relation aggregation before merging features across relation types. A further challenge involves leveraging large-scale data for training. To this end, we constructed a diverse and extensive dataset collection for both single-table and RDB tasks and developed a multi-stage pretraining and fine-tuning pipeline. Griffin is initially pretrained on single-table datasets using a random masked-cell completion task that does not require labeled data.
This is followed by joint supervised fine-tuning (SFT) on realistic tasks from both single-table and RDB datasets, and finally by task-specific fine-tuning and evaluation on each downstream RDB task. In total, the pretraining and SFT phases covered over 150 million nodes (rows), enabling the formation of large, heterogeneous, and temporal graphs across various domains to support large-model development. To assess the effectiveness of the model design and the impact of pretraining, Griffin was evaluated on two recent graph-centric RDB benchmarks: 4DBInfer (Wang et al., 2024) and RelBench (Robinson et al., 2024). The evaluation led to the following key findings: (1) Even without pretraining, the Griffin architecture achieves significant improvements on downstream tasks, demonstrating the advantages of its design. (2) Pretraining and SFT solely on single-table datasets enable Griffin to outperform its non-pretrained counterpart. (3) With SFT on RDBs that are similar to, or suitably diverse relative to, the downstream tasks, Griffin achieves even better results, particularly in scenarios with limited downstream task samples, highlighting its potential as a foundation model that transfers to downstream tasks with limited supervision. In summary, Griffin represents a significant step forward in the development of foundation models for RDBs by combining robust generalization capabilities with architectural innovations that address the complexities of relational data.

# 2. Preliminary

A Relational Database (RDB) is formally defined as a collection of tables, denoted by $\mathcal{R} = \{T^k\}_{k=1}^{K}$, where $K$ represents the total number of tables, and each table $T^k$ is structured as a matrix with $N_k$ rows (instances) and $M_k$ columns (features).
The individual entry in the $i$-th row and $j$-th column of table $T^k$ is represented by $T^k_{i,j}$. In our setting, these entries can take various forms, including numerical values, categorical values, text, or hash values used for indexing. A key characteristic of an RDB is the relationships between tables, which are defined by Primary Keys (PKs) and Foreign Keys (FKs). A PK is a column within a table that uniquely identifies each row, while an FK is a column in another table that references the PK, thereby inheriting values from the corresponding PK. Let $R$ denote the total number of PK-FK pairs across all tables. The heterogeneous graph derived from an RDB is formally defined as $\mathcal{G} = (\{\mathcal{V}^k\}_{k=1}^{K}, \{\mathcal{E}^r\}_{r=1}^{R})$, where the node set $\mathcal{V}^k$ of type $k$ is constructed from the rows of table $T^k$, with each node corresponding to a row in the table. The feature vector of each node is the corresponding row in the table, and the edge set $\mathcal{E}^r$ of type $r$ is constructed from the $r$-th PK-FK pair, which connects rows from the table with the FK to the referenced rows in the table with the PK. In this graph-based representation, almost all hash values used for indexing (such as those for PKs and FKs) are already encoded in the edge connection patterns. Therefore, we use only the numerical, categorical, and textual features as node attributes. A common approach in tabular data prediction is missing value prediction, where the goal is to infer a missing cell value using information from the same table or related tables. In many real-world applications, table rows are associated with a timestamp, denoted as $t^k_i$, and the number of rows can be very large.
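To make the table-to-graph construction above concrete, the following is a minimal Python sketch (not Griffin's actual pipeline) that builds typed node and edge sets from two toy tables linked by a PK-FK pair; all table and column names are invented for illustration. The key columns are dropped from the node features, since their information is already captured by the edges.

```python
# Toy tables: "customers" (PK cid) and "orders" (PK oid, FK cid -> customers.cid).
customers = [{"cid": 0, "country": "DE"}, {"cid": 1, "country": "US"}]
orders = [
    {"oid": 10, "cid": 0, "amount": 12.5},
    {"oid": 11, "cid": 0, "amount": 3.0},
    {"oid": 12, "cid": 1, "amount": 7.2},
]

def build_hetero_graph(tables, pk_cols, fk_pairs):
    """tables: {name: list of row dicts}; pk_cols: {name: pk column};
    fk_pairs: [(child_table, fk_column, parent_table)].
    Returns node sets V^k (one node per row, key columns dropped) and
    typed edge sets E^r (one per PK-FK pair) as row-index pairs."""
    fk_cols = {(child, col) for child, col, _ in fk_pairs}
    nodes = {
        name: [{c: v for c, v in row.items()
                if c != pk_cols[name] and (name, c) not in fk_cols}
               for row in rows]
        for name, rows in tables.items()
    }
    edges = {}
    for child, fk_col, parent in fk_pairs:
        # Map each PK value to its row index, then link FK rows to PK rows.
        pk_index = {row[pk_cols[parent]]: i
                    for i, row in enumerate(tables[parent])}
        edges[(child, parent)] = [(i, pk_index[row[fk_col]])
                                  for i, row in enumerate(tables[child])]
    return nodes, edges

nodes, edges = build_hetero_graph(
    {"customers": customers, "orders": orders},
    {"customers": "cid", "orders": "oid"},
    [("orders", "cid", "customers")],
)
# Each order node is linked to its customer node; hash keys survive only as edges.
```

Each `(child, parent)` edge set here plays the role of one $\mathcal{E}^r$, and the per-table row lists the role of the node sets $\mathcal{V}^k$.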
To maintain temporal causality and reduce memory usage, only rows with timestamps earlier than that of the target row are allowed for use in prediction. In the graph-based formulation of this task, the problem is rephrased as sampling a rooted subgraph that includes only nodes (rows) with earlier timestamps than the queried node. Let the target column value be represented as $T^{k'}_{i',j'}$, associated with the node $\mathcal{V}^{k'}_{i'}$. This node serves as the root of a temporal, rooted computation subgraph $\mathcal{T}^{(L)} = \big(\{\mathcal{V}^{(l)}\}_{l=0}^{L}, \{\mathcal{E}^{(l)}\}_{l=0}^{L-1}\big)$, where $\mathcal{V}^{(0)} = \{(\mathcal{V}^{k'}_{i'} \setminus T^{k'}_{i',j'})\}$ is the root node, which does not include the target column's feature. The set $\mathcal{V}^{(l)}$ contains nodes at hop $l$ from the root and only includes nodes satisfying $t < t^{k'}_{i'}$. Each edge set $\mathcal{E}^{(l)}$ includes edges connecting nodes in $\mathcal{V}^{(l+1)}$ to parent nodes in $\mathcal{V}^{(l)}$, and satisfies: $\forall \mathcal{V}^{(l+1)}_p \in \mathcal{V}^{(l+1)}$, $\exists \mathcal{V}^{(l)}_q \in \mathcal{V}^{(l)}$ with $(\mathcal{V}^{(l+1)}_p, \mathcal{V}^{(l)}_q) \in \mathcal{E}^{(l)}$.
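The temporally constrained rooted subgraph described above can be sketched as a hop-by-hop expansion that admits only neighbors with strictly earlier timestamps. This toy implementation (the node names and adjacency-list format are assumptions for illustration, not the paper's code):

```python
def sample_temporal_subgraph(adj, timestamps, root, t_root, num_hops):
    """Expand from `root`, admitting a neighbor only if its timestamp is
    strictly earlier than t_root (the queried row's time).
    adj: {node: [neighbors]}; returns one node set per hop, V^(0)..V^(L)."""
    layers = [{root}]
    visited = {root}
    for _ in range(num_hops):
        frontier = set()
        for u in layers[-1]:
            for v in adj.get(u, []):
                if v not in visited and timestamps[v] < t_root:
                    visited.add(v)
                    frontier.add(v)
        layers.append(frontier)
    return layers

# Toy example: one customer "c" with orders o1..o4 at times 1, 2, 5, 9.
adj = {"o3": ["c"], "c": ["o1", "o2", "o3", "o4"],
       "o1": ["c"], "o2": ["c"], "o4": ["c"]}
ts = {"c": 0, "o1": 1, "o2": 2, "o3": 5, "o4": 9}
layers = sample_temporal_subgraph(adj, ts, "o3", ts["o3"], 2)
# o4 (time 9) is excluded: it lies in the future of the queried row o3 (time 5).
```

Dropping future neighbors at sampling time is what enforces the temporal-causality requirement from Section 2, rather than relying on the model to ignore leaked information.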
Through this transformation, the RDB task of predicting a column value from multiple related tables is converted into a graph-based prediction problem over a temporally constrained rooted subgraph, defined as follows:

• Input: A sampled rooted subgraph $\mathcal{T}^{(L)}$ constructed for the target column $T^{k'}_{i',j'}$.
• Output: Predicted value of the target column $T^{k'}_{i',j'}$.

Figure 1: Overview of the Griffin Model Framework. The framework first transforms RDBs into a graph structure by representing each row as a node and using primary key–foreign key relationships as edges. Given a target column, a temporally constrained subgraph is sampled and processed using a unified encoder module before being passed to an MPNN. Finally, the unified task decoders generate predictions based on whether the task is classification or regression.

# 3. Model Design

In this section, we present the model design of Griffin, as illustrated in Figure 1. The framework consists of three main components for processing a sampled subgraph and generating predictions: a unified data encoder, an MPNN tailored for RDBs, and a unified task decoder.

# 3.1. Unified Data Encoder

A core innovation of Griffin is the unification of different RDB input data. Previous RDB models typically use separate embedding layers for categorical features and feed raw numerical values directly, which makes it difficult for them to generalize to new data. In contrast, Griffin handles all categorical and text features with a pretrained text encoder, while numerical features are normalized and processed with a pretrained float encoder. This approach ensures that input distributions are more consistent across tasks. Additionally, Griffin uses RDB metadata and task-specific information to create task embeddings, allowing the same model to perform different tasks based on the task embedding provided at the input.
Categorical and Textual Features For categorical features, we first convert them into text representations using the metadata from the RDB. Both categorical and text features are then passed through a single, pretrained text encoder (Nussbaum et al., 2024). Each feature (or "cell") is encoded into a fixed-length vector, which captures rich semantic information. Cosine similarity between these vectors allows us to measure the similarity between different texts.

Numerical Features For numerical data, to avoid issues with extreme values, we first apply a quantile normalizer (Bolstad et al., 2003), transforming the distribution to an approximately standard normal one. We then use a pretrained Multi-Layer Perceptron (MLP) to convert the normalized values into $d$-dimensional embedding vectors in the same space. To train the MLP, we sample $x$ from a normal distribution:
$$
w = \mathrm{ENC}(x) \in \mathbb{R}^d, \quad y = \mathrm{DEC}(w) \in \mathbb{R},
$$
where ENC encodes the float into an embedding, and DEC decodes the embedding back to a float value. The model is trained using the L1 loss $|y - x|$, and we apply LayerNorm (without affine weights) to the encoder's output to prevent collapse. After pretraining, the encoder and decoder are frozen and do not participate in the training of the Griffin model. During inference, the numerical input is first normalized and then passed through the encoder.

Metadata Information Griffin also incorporates flexible encoding of RDB metadata, such as table names, column names, and edge types. This metadata is encoded with the text encoder to provide additional node and edge features.

Task Representations These unified input encoders map inputs from different tasks to the same space. However, the model also needs to account for the syntactic differences between tasks.
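Returning to the numerical pipeline above: the quantile-normalization step can be illustrated with a small stdlib-only sketch that maps each value to the standard-normal quantile of its empirical rank. This is a simplified stand-in for the normalizer of Bolstad et al., not its exact implementation:

```python
from statistics import NormalDist

def quantile_normalize(values):
    """Map each value to the standard-normal quantile of its midpoint
    empirical rank, so arbitrary input distributions (outliers included)
    land on a roughly N(0, 1) scale before the float encoder."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    nd = NormalDist()
    out = [0.0] * n
    for rank, i in enumerate(order):
        out[i] = nd.inv_cdf((rank + 0.5) / n)  # rank in (0, 1) -> z-score
    return out

z = quantile_normalize([1.0, 2.0, 1000.0, 3.0])
# The outlier 1000.0 stays the largest value but is mapped to a bounded
# z-score (about 1.15), keeping the float encoder's inputs well-behaved.
```

Because the transform depends only on ranks, extreme raw magnitudes cannot blow up the encoder's input range, which is the "issues with extreme values" the paragraph above refers to.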
For example, if two missing cells in the same row were predicted without additional task-specific embeddings, the model's input for both tasks would be identical, leading to the same output representation and poor expressivity. To address this, we introduce a task embedding. This embedding is generated by the text encoder using the column name of the cell to be predicted as input, allowing the model to produce distinct embeddings for different tasks. With all components unified across different types of data, the sampled subgraph is extended as follows:

• Input: A sampled rooted subgraph $\mathcal{T}^{(L)}$ constructed for the target column $T^{k'}_{i',j'}$.
• Output: An enriched rooted subgraph in which each node $i$ is associated with a feature tensor $x_i \in \mathbb{R}^{L_i \times d}$ and a metadata tensor $m_i \in \mathbb{R}^{L_i \times d}$, where $L_i$ is the number of cells in the node (which may vary between nodes). Each edge $(i, j)$ of relation type $r$ carries a relation-specific metadata vector $e_r \in \mathbb{R}^d$. Additionally, a task embedding vector $t \in \mathbb{R}^d$ is provided to represent task-specific information.

# 3.2. MPNN Architecture

The MPNN of Griffin is composed of multiple layers, each containing two key components: a cross-attention module that extracts information from node features and a message-passing module that facilitates information exchange between nodes. The intermediate embedding of node $i$ across layers is maintained as $u_i$, while the final output of the model is the representation of the target node, denoted as $z$.

Cross Attention Module The node feature tensor $x_i \in \mathbb{R}^{L_i \times d}$ presents three challenges:

• The number of cells $L_i$ varies across nodes, so the encoder must handle variable-length inputs.
• $x_i$ contains rich information, some of which may not be relevant to the task.
The encoder must selectively focus on task-relevant information.

• In RDB data, the column order is meaningless, so the encoder should be invariant to column permutation for better generalization.

To address these challenges, Griffin introduces a cross-attention module that allows the model to selectively focus on relevant information from individual cells within a row (treated as a node in the graph). This enables the model to capture interactions between columns and rows, modeling complex dependencies critical for relational data analysis. Each row in a table is represented as an attention-based aggregation of its column data:
$$
v_i^l = \mathrm{Attention}_l\left(\mathrm{QMLP}_l(u_i, t),\; m_i,\; x_i\right),
$$
where $l$ is the layer index, $v_i^l$ is the output of the attention mechanism for node $i$, and $\mathrm{QMLP}_l$ is an MLP that takes the node representation $u_i \in \mathbb{R}^d$ and the task representation $t \in \mathbb{R}^d$ as inputs to produce the query for cross-attention. The keys for cross-attention are the metadata embeddings $m_i \in \mathbb{R}^{L_i \times d}$ (column names) of node $i$, and the values are $x_i \in \mathbb{R}^{L_i \times d}$, the input node features. The result, $v_i^l$, is added to $u_i$ to update the node representation. This cross-attention module overcomes the information loss typically seen in traditional GNNs, which often aggregate different columns using simple methods such as mean aggregation. By focusing on specific cells in a row and attending to their contextual relationships, the module improves the model's ability to extract nuanced information, enhancing task performance.

Hierarchical Aggregation Along with the cross-attention module, Griffin enhances its MPNN to reduce information loss.
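Before turning to aggregation, the per-node cross-attention step above can be sketched as follows. For illustration only, the learned $\mathrm{QMLP}_l$ is replaced by a plain elementwise sum of $u_i$ and $t$ (an assumption, not the actual module):

```python
import math

def cell_cross_attention(u, t, m, x):
    """One node's cross-attention: query from node state u and task
    embedding t (u + t stands in for QMLP_l); keys are the column-name
    embeddings m (L_i rows); values are the cell embeddings x (L_i rows)."""
    d = len(u)
    q = [a + b for a, b in zip(u, t)]
    # One scaled dot-product score per cell, then a softmax over the L_i cells.
    scores = [sum(k * qj for k, qj in zip(row, q)) / math.sqrt(d) for row in m]
    mx = max(scores)
    w = [math.exp(s - mx) for s in scores]
    total = sum(w)
    w = [wi / total for wi in w]
    # Weighted sum of cell values: a single d-dimensional row summary.
    return [sum(wi * xi[j] for wi, xi in zip(w, x)) for j in range(d)]

u, t = [1.0, 0.0], [0.0, 1.0]
m = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3 cells, d = 2
x = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
out = cell_cross_attention(u, t, m, x)
```

Because each softmax weight is tied to a per-cell key/value pair, permuting the columns permutes the weights identically, so the output is invariant to column order; variable $L_i$ and selective focus fall out of the same construction, covering all three requirements listed above.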
Instead of aggregating all neighbors uniformly, Griffin first aggregates information within each relation type and then combines features across different relations. This hierarchical aggregation helps preserve the structure of relational data by ensuring that information is aggregated within each relation (e.g., a specific table or type of relationship) before being combined across multiple relations. This approach prevents the loss of important relational context and helps the model learn more informative representations. Specifically, in cross-table modeling, Griffin uses a temporal heterogeneous graph representation of the RDB, where rows are modeled as nodes. These node embeddings are propagated and updated via a GNN. The embedding for node $i$ at the $l$-th layer, denoted as $h_i^l$, is updated as follows:
$$
h_i^{r,l} = \mathrm{Mean}_l\left(\mathrm{AMLP}_l(u_j) \mid (i, j) \in \mathcal{E}^r\right),
$$
$$
h_i^l = \mathrm{Max}_l\left(h_i^{r,l} \odot e_r \mid r \in R\right),
$$
where AMLP is an MLP that transforms the aggregated node representations. First, the representations of neighboring nodes are averaged within each relation. Then, max aggregation is applied across all relations. This step ensures that the representation from any single relation is not overwhelmed by the others, which can be problematic when the number of neighbors varies widely across relations. $h_i^l$ is then added to $u_i$ to update the node representation. The subgraph is encoded into one vector by the MPNN:

• Input: The enriched rooted subgraph and task vector.
• Output: A fixed-length vector $z \in \mathbb{R}^d$.

# 3.3. Unified Task Decoder

Given a fixed-length embedding from the MPNN output, we apply a single task decoder per task type.
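As a concrete illustration of the hierarchical aggregation in Section 3.2, the following sketch performs a mean within each relation type followed by an elementwise max across relations; $\mathrm{AMLP}_l$ and the relation embedding $e_r$ are omitted for brevity:

```python
def hierarchical_aggregate(neighbors_by_relation):
    """Two-step neighbor aggregation for one node: mean within each
    relation type, then elementwise max across relation types.
    neighbors_by_relation: {relation: [d-dim neighbor feature lists]}."""
    per_relation = []
    for feats in neighbors_by_relation.values():
        d = len(feats[0])
        # Mean over this relation's neighbors, dimension by dimension.
        per_relation.append([sum(f[j] for f in feats) / len(feats)
                             for j in range(d)])
    d = len(per_relation[0])
    # Elementwise max across the per-relation summaries.
    return [max(v[j] for v in per_relation) for j in range(d)]

# A node with two neighbor relations of very different sizes:
out = hierarchical_aggregate({
    "orders":  [[1.0, 2.0], [3.0, 4.0]],   # mean -> [2.0, 3.0]
    "reviews": [[10.0, 0.0]],              # mean -> [10.0, 0.0]
})
# out == [10.0, 3.0]
```

With this two-step scheme, a high-degree relation (e.g., thousands of transactions) cannot drown out a sparse one, since each relation is first summarized independently before the cross-relation max.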
Classification tasks We directly use the text embeddings of the target labels as the classification head. For example, when predicting the value of the $(i, j)$-th cell in a table, let $z_1, z_2, \ldots, z_c \in \mathbb{R}^d$ denote the text embeddings of all candidate categories, and let $z \in \mathbb{R}^d$ be the output vector from Griffin. The prediction probability distribution is:
$$
\operatorname{softmax}\left(\left[\langle z, z_i \rangle \mid i = 1, 2, \ldots, c\right]\right),
$$
where the logit for each category is obtained by the inner product between the output vector $z$ and the corresponding category embedding $z_i$.

Regression tasks The output vector is passed through the pretrained number decoder, denoted as DEC, to produce the predicted value. The final output can then be denormalized according to the specifications of the downstream task. Different tasks may share similar label embeddings or number distributions, allowing the model to better capture task-specific characteristics and adapt to new tasks. Given the decoder design, the final prediction step is defined as:

• Input: A fixed-length vector $z \in \mathbb{R}^d$.
• Output: The predicted value for the target column $T^{k'}_{i',j'}$.

# 4. Training Pipeline

In this section, we describe the training pipeline of Griffin, which consists of pretraining and downstream fine-tuning stages. The pretraining phase includes two components: completion pretraining and joint supervised fine-tuning (SFT). Both are designed to remain independent of the downstream tasks to avoid task-specific bias. The final stage involves task-specific fine-tuning, where Griffin is adapted to individual downstream tasks.

# 4.1. Pretraining

Griffin is pretrained on a diverse set of datasets to ensure effective generalization across various RDB tasks.
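The unified classification head of Section 3.3 can be sketched in a few lines: logits are inner products between the model output and the label text embeddings, followed by a softmax, so one head covers any number of candidate categories. The label names and embeddings below are invented for illustration:

```python
import math

def classify_with_label_embeddings(z, label_embs):
    """Unified classification head: the logit for each category is the
    inner product <z, z_i> with that category's text embedding."""
    logits = [sum(a * b for a, b in zip(z, e)) for e in label_embs]
    mx = max(logits)
    exps = [math.exp(v - mx) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

# Hypothetical 2-class task: label embeddings for "churn" and "no churn".
probs = classify_with_label_embeddings([1.0, 0.0], [[0.9, 0.1], [0.1, 0.9]])
```

Swapping in a different label set changes only the list of embeddings, not the architecture, which is how a single checkpoint serves tasks with different numbers of classes.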
The pretraining process has two main components:

• Single-Table Datasets: These are used to train the model on tasks involving individual tables, providing a foundational understanding of tabular data.
• RDB Datasets: Griffin is also pretrained on large-scale, heterogeneous, and temporal graphs derived from multiple RDB domains. These graphs capture the relational and temporal structure of the data.

By using both single-table and relational data, Griffin learns to generalize across different types of RDBs, making it adaptable to a wide variety of tasks. To fully exploit these rich datasets, pretraining proceeds in two stages: completion pretraining and joint SFT.

Completion Pretraining We first use a completion task similar to language modeling but adapted for the tabular domain. The model learns to predict masked values within a row based on the remaining data. For a given row where one column is randomly masked, a column-invariant row encoder is used to generate the masked row's embedding, which is used to predict the masked value. Formally, for a row $T^k_{i,:}$ with a target column $j'$ to be predicted, the pretraining objective is defined as:
$$
\mathrm{loss} = 1 - \cos\left(\mathrm{Model}_\theta\left(T^k_{i,:\setminus j'}\right), \mathrm{Encoder}\left(T^k_{i,j'}\right)\right),
$$
where $\mathrm{Model}_\theta\left(T^k_{i,:\setminus j'}\right)$ generates the row embedding and $\mathrm{Encoder}\left(T^k_{i,j'}\right)$ provides the true embedding for the masked column.
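The completion objective above amounts to one minus the cosine similarity between the predicted row embedding and the masked cell's text embedding, e.g.:

```python
import math

def completion_loss(pred_emb, true_emb):
    """1 - cos(pred, true): 0 when the row embedding points in the same
    direction as the masked cell's embedding, up to 2 when opposite."""
    dot = sum(a * b for a, b in zip(pred_emb, true_emb))
    na = math.sqrt(sum(a * a for a in pred_emb))
    nb = math.sqrt(sum(b * b for b in true_emb))
    return 1.0 - dot / (na * nb)
```

Aligned embeddings give a loss of 0 regardless of scale, orthogonal ones give 1, and opposite ones give 2; only the direction of the prediction matters.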
The objective minimizes the cosine distance between the predicted and true embeddings.

Joint Supervised Fine-Tuning Following completion pretraining, Griffin is jointly fine-tuned on selected realistic tasks to align it more closely with real-world tabular tasks. This stage utilizes both labeled single-table datasets and carefully selected RDB datasets, ensuring no data leakage into downstream evaluations. The fine-tuning process optimizes the model for a set of related tasks, leveraging pretrained knowledge while adapting to the specific needs of tabular tasks. Griffin's task modeling framework, which supports both classification and regression in a graph-based RDB representation, employs a unified decoder (as in Section 3.3) to map output embeddings to task predictions. Cross-entropy loss is used for classification tasks, while L2 loss is used for regression tasks.

# 4.2. Downstream Task Fine-Tuning

After completing pretraining and joint SFT, Griffin is fine-tuned on each individual downstream task for evaluation. This process follows the specific pipeline requirements of each benchmark to ensure a fair comparison with baselines.

# 4.3. Model Variants Considered

In practice, we consider three model variants based on the datasets used during training:

Griffin-unpretrained refers to the model without any pretraining. This variant differs from other GNN baselines only in its architectural design, with no exposure to external data prior to downstream training, aiming to reveal the advantages of the Griffin model design.

Griffin-pretrained refers to the model pretrained exclusively on single-table datasets. We use a single pretrained checkpoint for fine-tuning across all downstream tasks. This checkpoint is strictly disjoint from any downstream tasks, ensuring there is no data leakage. This configuration isolates the effect of pretraining and highlights the potential for building a general-purpose foundation model for RDBs.
Griffin-RDB-SFT is trained on a combination of single-table and RDB datasets. Since RDB datasets are used for both pretraining and downstream fine-tuning, we maintain multiple checkpoints, each trained on subsets of the RDB data that are non-overlapping with the downstream tasks. This setup also enables us to investigate the transferability of knowledge across different RDBs.

# 5. Related Work

# 5.1. Tabular Predictive Tasks

Tabular predictive tasks involve learning to estimate missing or target values in structured tables. These tasks typically include classification and regression, using available features from the same table or from related tables. Models are trained to capture statistical patterns within rows, across columns, and across multiple tables when relational data is present. Our model focuses specifically on this type of task.

Single Table Models Research on single-table data has evolved through various approaches. Traditional methods, such as XGBoost (Chen & Guestrin, 2016), LightGBM (Ke et al., 2017), and CatBoost (Prokhorenkova et al., 2018), have been widely adopted due to their scalability and strong performance on structured data. More recently, transformer-based methods like TabTransformer (Huang et al., 2020), TabNet (Arik & Pfister, 2021), FT-Transformer (Gorishniy et al., 2021), and SAINT (Somepalli et al., 2021) have leveraged attention mechanisms to capture complex relationships within rows and columns. Additionally, graph-based methods such as GRAPE (You et al., 2020), TabularNet (Du et al., 2021), TabGNN (Guo et al., 2021), and CARTE (Kim et al., 2024) represent tabular data as graphs, incorporating multiplex and hypergraph structures to model interactions between rows and columns more effectively.
Other works have explored improved encoding strategies for numerical features (Gorishniy et al., 2022; Yarullin & Isaev, 2023), while some have highlighted the benefits of incorporating nearest-neighbor information (Gorishniy et al., 2023; Ye et al., 2025). Although these models enhance feature interaction modeling, they primarily focus on single-table datasets and typically fail to model relational dependencies across multiple tables.

RDB Models RDBs extend the concept of single-table models by incorporating multiple interrelated tables, requiring models to capture both intra- and inter-table relationships. Early approaches, such as DFS (Kanter & Veeramachaneni, 2015) and RDBToGraph (Cvitkovic, 2020), attempt to flatten RDBs into a single table or apply GNNs to model relationships between tables. Other works, like ATJNet (Bai et al., 2021) and KEN (Cvetkov-Iliev et al., 2023), use hypergraphs and knowledge graphs to model inter-table dependencies, while GFS (Zhang et al., 2023a) integrates differentiable single-table models as embedding functions to preserve table structures. Some methods that convert structured data into unstructured embeddings can still retain structural information (Grover & Leskovec, 2016), such as EmbDi (Cappuzzo et al., 2020) and RDF2Vec (Ristoski & Paulheim, 2016). As RDB tasks have attracted increasing attention (Fey et al., 2024), more comprehensive benchmarks and toolboxes have emerged. For example, 4DBInfer (Wang et al., 2024), RelBench (Robinson et al., 2024; Fey et al., 2023), and PytorchFrame (Hu et al., 2024) propose complete pipelines for converting RDBs into graph structures that can be used by GNN-based models. More recent efforts (Yuan et al., 2024; Chen et al., 2025) aim to design more expressive GNN architectures for relational data. These models perform well on individual RDB tasks, whereas Griffin is designed as a foundation model that generalizes across a wide range of relational tasks.

# 5.2.
Table QA Tasks

Table question answering (QA) tasks focus on answering natural language queries by reasoning over tabular data. Given a question and a table (or a set of tables), the model must interpret the query, identify relevant cells, and either extract or compute the correct answer, or generate an executable SQL query. These tasks require both natural language understanding and structured data reasoning. TaPas (Herzig et al., 2020) enhances BERT with a table-aware encoder. Tapex (Liu et al., 2021) explores learning a neural SQL executor. OmniTab (Jiang et al., 2022) introduces pretraining using both synthetic and natural datasets. TableGPT2 (Su et al., 2024) treats tabular data as a distinct modality for building general-purpose models. Numerous benchmarks have been proposed for comprehensive evaluation (Yu et al., 2018; Lei et al., 2024; Chen et al., 2019; Wu et al., 2024; Li et al., 2023; Qiu et al., 2024).

# 5.3. Foundation Models for Predictive Tasks

Graph Foundation Models (GFMs) aim to pretrain large models that generalize across multiple graph datasets and tasks. Many GFMs, such as OFA (Liu et al., 2023) and Tape (He et al., 2023), integrate Large Language Models (LLMs) to enhance feature spaces or assist in training GNNs. Other methods, like UniGraph (He & Hooi, 2024), adapt graph data for better LLM integration. While some GFMs, such as GraphText (Zhao et al., 2023), convert graph structures into language-like representations for processing by LLMs, others focus on novel GNN architectures, such as GraphAny (Zhao et al., 2024). Griffin builds on the GFM paradigm but adapts it to RDBs by pretraining on both single-table and multi-table data and by incorporating advanced tabular-specific data encoders and graph-based components such as cross-attention over table meta-information; these adaptations make Griffin better suited to RDBs than existing GFMs.

Tabular Foundation Models (TFMs) aim to generalize across tabular data, often leveraging transformer-based architectures.
Models such as TaBERT (Yin et al., 2020) and TabLLM (Hegselmann et al., 2023) integrate text and tabular data to enhance table structure understanding, while TransTab (Wang & Sun, 2022) and XTab (Zhu et al., 2023) explore transfer learning across tables with varying column structures. UniTabE (Yang et al., 2024) and TPBerta (Yan et al., 2024) employ specialized tabular encoders to better align transformers with tabular formats. TabPFN (Hollmann et al., 2022; 2025) takes a different approach by avoiding the use of text models and instead pretraining on a large number of synthetic datasets; it achieves strong performance in few-shot settings. However, these models primarily focus on single-table data and lack mechanisms to capture inter-table relationships in RDBs. While Griffin incorporates transformer-based and tabular techniques, it extends beyond existing TFMs by explicitly modeling relational structures across multiple tables, addressing the complexities in RDBs.

Figure 2: Performance Comparison of Fully Fine-Tuned Models on Individual Tasks. This figure compares the performance of four GNN baselines, four single-table baselines with DFS, and two Griffin variants, each fine-tuned on individual tasks. The leftmost subfigure presents the average rank across all tasks. The remaining subfigures group tasks by evaluation metric, with results averaged accordingly. All values are positive; higher values indicate better performance for Accuracy and ROC-AUC, while lower values are better for the remaining metrics.

# 6. Experiments

In this section, we aim to address the following questions:

Q1: Can Griffin, with its advanced design, outperform existing models under the same training settings?
Q2: Can utilizing a single pretrained checkpoint universally enhance predictive performance?
Q3: Can joint SFT with RDB data improve transferability, and under what conditions does it provide the most benefit?

# 6.1.
Experimental Setup The experimental setup is designed to evaluate Griffin across diverse tasks and datasets, leveraging the pretraining and fine-tuning pipeline described in Section 4. Datasets The selected datasets include both single-table and RDB datasets, with details provided in Appendix A. • Single-Table Datasets: Over 200 datasets were curated from TPBerta (Yan et al., 2024) and CARTE (Kim et al., 2024), comprising approximately 10 million rows. These datasets were used for completion pretraining, enabling scalable learning without human-labeled data. Only 50 datasets contained labels for joint SFT. While additional large-scale datasets from diverse domains were collected, they were excluded from pretraining for two key reasons: (1) many were subsets of RDBs, making single-table pretraining ineffective, and (2) their distributions diverged significantly from downstream RDBs. • RDB Datasets: We sourced large-scale temporal RDBs from two leading benchmarks, 4DBInfer (Wang et al., 2024) and RelBench (Robinson et al., 2024), covering a wide range of domains, scales, and tasks. A total of 24 tasks were selected for SFT and downstream evaluation. Baselines To ensure a fair comparison across benchmarks, we standardized evaluation-related settings, which led to certain modifications in the reported results. These adjustments include aligning preprocessing steps, normalization strategies, and other evaluation procedures. As a result, some baseline results may differ from those originally reported in the respective benchmarks. We include four GNN baselines: SAGE, GAT, PNA, and HGT. Additionally, we evaluate four single-table models enhanced with the Deep Feature Synthesis (DFS) method (Kanter & Veeramachaneni, 2015) to incorporate multi-table information. 
For evaluations that involve only single-table data without any relational context, we recommend referring to the original experimental results reported in 4DBInfer and RelBench, which have already demonstrated significantly weaker performance in the absence of multi-table information. Further details on these modifications and baseline configurations are provided in Appendix B.

Hyperparameters and Training Griffin was trained with fixed hyperparameters across all experiments to ensure robustness. During pretraining, all single-table datasets were used for completion pretraining, while subsets of single-table and RDB datasets were selected for joint SFT based on specific experimental objectives. Further details about model settings are provided in Appendix C.

# 6.2. Reply to Q1: Meta-Information and Advanced Architecture Design Enhance Model Performance

Figure 2 presents the performance comparison of different models fully fine-tuned on individual tasks. Griffin-unpretrained outperforms all other models in average rank and demonstrates significant improvements. To analyze the impact of key design choices, we conducted an ablation study on the cross-attention module and the aggregation functions in the MPNN, with results presented in Figure 3. Replacing these components with a plain average of column features and a mean-only aggregator for both intra-type and inter-type nodes results in a significant performance drop.

Figure 3: Ablation of Griffin's cross-attention module and MPNN aggregation functions, comparing Griffin, Griffin-avg-attention, and Griffin-mean-GNN in terms of average rank, ROC-AUC, MAE, and log loss.

# 6.3. Reply to Q2: Single-Table Pretraining Universally Enhances Model Performance

To answer Q2, we introduce Griffin-Pretrained, a variant of Griffin that undergoes a pretraining stage before task-specific fine-tuning. The pretraining process consists of completion pretraining and joint SFT, both conducted exclusively on single-table datasets. Notably, no RDB datasets are included in pretraining, ensuring no data leakage while demonstrating the adaptability of the pretraining framework across different domains. Figure 2 presents the performance comparison between Griffin-Pretrained and Griffin-unpretrained across multiple tasks. The results show that Griffin-Pretrained outperforms its non-pretrained counterpart, validating the universal benefits of pretraining. These findings confirm that pretraining a single checkpoint on diverse single-table datasets can significantly improve predictive performance, even for tasks in RDBs, despite the absence of RDB-specific data during pretraining.

# 6.4. Reply to Q3: Joint SFT with RDB Enhances Transferability, Driven by Similarity or Diversity

To analyze the factors influencing transferability, we propose two key hypotheses, building on recent research on transferability (Ehrig et al., 2024).

• Similarity. Pretraining on datasets similar to the downstream task improves transferability by providing aligned feature representations and task structures.
• Diversity. Pretraining on a broader and more diverse set of datasets enhances transferability by improving the model's ability to generalize across different domains.

To test these hypotheses, we categorized datasets into two broad domains: commerce and others, each containing a diverse set of tasks. We further split each domain into two subsets, leading to four final groups: Commerce-1, Commerce-2, Others-1, and Others-2.

• Commerce: Commerce-1 and Commerce-2 originate from e-commerce datasets, covering tasks such as user churn prediction and purchase rate estimation. These datasets are highly similar.
• Others: Others-1 and Others-2, despite belonging to the same broad domain, exhibit significant internal diversity due to their inclusion of sports, social networks, flight records, and clinical data, leading to varying distributions.
To evaluate transferability, we performed joint SFT on one group and tested it on each task of another group using limited-sample fine-tuning, making transfer effects clearly visible. To ensure robustness, each task was evaluated across five different random seeds for split selection. We compared two types of models to assess transferability:

• No-Pretrain Model – only trained on downstream tasks.

• Griffin-RDB-SFT – Griffin pretrained on both single-tabular and selected RDB datasets.

The results, presented in Figure 4, provide insights into how similarity and diversity influence transfer learning.

Figure 4: Evaluating Transferability Across Different SFT Domains. This figure compares the impact of different SFT strategies on transferability. Each subfigure presents four models: a no-pretraining baseline and three models pretrained on single-table data followed by SFT on different domains. By comparing performance relative to the no-pretraining baseline, we observe positive transfer effects when the SFT and downstream tasks are similar, as seen in commerce-to-commerce settings. Additionally, SFT on the more diverse "Others" group improves performance on the "Commerce" domain.

Impact of Similarity on Transferability

To evaluate the role of similarity, we first analyze transfer performance between the commerce groups. The results show that both Commerce-1 to Commerce-2 and Commerce-2 to Commerce-1 benefit significantly from pretraining. Notably, pretraining on Commerce-2 and transferring to Commerce-1 outperforms all other pretraining domains, indicating that pretraining on highly similar datasets results in stronger transfer benefits. Conversely, when transferring from commerce to non-commerce domains, the lack of similarity leads to poor transfer performance. In these cases, pretraining often underperforms compared to the no-pretrain baseline, resulting in a substantial performance gap.
This trend confirms our Similarity Hypothesis—the closer the pretraining dataset is to the downstream task, the stronger the transferability.

Impact of Diversity on Transferability

Next, we examine the role of diversity in transfer learning. When transferring from others to commerce, pretraining consistently outperforms the no-pretrain model, and in some cases even surpasses the results of similar-domain commerce-to-commerce transfer. This suggests that diverse pretraining can sometimes be as effective as, or even better than, pretraining on similar datasets. For transfers between Others-1 and Others-2, where similarity is low, we observe a one-directional transfer benefit—Others-1 effectively transfers to Others-2, but not vice versa. We hypothesize that this is due to greater dataset diversity in Others-1, which provides a broader pretraining foundation that improves generalization when fine-tuned on Others-2. These findings confirm our Diversity Hypothesis—the more diverse the pretraining dataset, the stronger the model's ability to generalize.

Additionally, we conducted further experiments on different joint SFT strategies, including SFT with limited samples and mixed SFT with single-tabular datasets, as detailed in Appendix D. The results show that full SFT achieves the best performance, reinforcing the benefits of complete pretraining. Furthermore, despite variations in SFT strategies, the observed transferability patterns across domains remain similar. This further validates the robustness of our domain transferability hypothesis and highlights the fundamental role of similarity and diversity in effective transfer learning. We also conducted few-shot experiments with TabPFNv2 combined with DFS as a reference, as detailed in Appendix E. TabPFNv2 (Hollmann et al., 2025) is a powerful single-table model that supports few-shot learning and even outperforms some state-of-the-art models on certain datasets.
Although DFS is not ideal for few-shot scenarios and involves substantial preprocessing time, we include it for comparison. The results show that Griffin and TabPFNv2 each excel under different conditions.
We introduce Griffin, the first attempt at a foundation model designed specifically for Relational Databases (RDBs). Unlike previous smaller models focused on single RDB tasks, Griffin unifies the data encoder and task decoder to handle diverse tasks. Additionally, we enhance the architecture by incorporating a cross-attention module and a novel aggregator. Griffin utilizes pretraining on both single-table and RDB datasets, employing advanced encoders for categorical, numerical, and metadata features, along with innovative components such as cross-attention modules and enhanced message-passing neural networks (MPNNs) to capture the complexities of relational data. Evaluated on large-scale, heterogeneous, and temporal graphs extracted from RDBs across various domains (spanning over 150 million nodes), Griffin demonstrates superior or comparable performance to individually trained models, excels in low-data scenarios, and shows strong transferability, driven by similarity and diversity in pretraining, across new datasets and tasks, highlighting its potential as a universally applicable foundation model for RDBs. Code available at https://github.com/yanxwb/Griffin.
# 1 Introduction

Generalization is a fundamental concept across various branches of science. In the realm of machine learning, the notion of model generalizability serves as a measure of how effectively a model, learned from a limited set of training samples, can extend its performance to unseen data during testing. We study the problem of generalizability in transforming tables for joins, where the task is to transform data from one table formatting to another to make them equi-joinable, using only a limited sample of joinable row pairs. In this context, the transformations are used to explain and validate the join process, and the generalizability of the transformations signifies the capacity to accurately represent the true mapping rather than capturing some form of accidental regularity in the data. Also, a more general transformation will garner more supporting evidence from the data and may be deemed more reliable, as opposed to a transformation that may capture random noise in the data. The problem of transforming tables has been explored in the literature [8,9,30,18], with some variations in the supported set of transformations. However, the generality aspects of these transformations have not been investigated in these studies.

S. Omidvartehrani et al.

Fig. 1: Example of joinable tables and transformations

Motivating Example: A data analyst is tasked to integrate information about faculty members from various sources to create comprehensive profiles. Consider the tables presented in Figure 1 from three different sources. Despite describing the same entities, these tables are not equi-joinable due to formatting mismatches. The manual construction of a mapping between these tables can be both time-consuming and may necessitate domain knowledge that the analyst might not possess. Suppose the analyst aims to map the instructor name in Table (a) to the contact email in (b) and the author name in (c).
Leveraging a recent work on transforming tables [18], referred to as CST and regarded as state-of-the-art for its improvement upon previous works, the analyst obtains three transformations, denoted as $T_1$, $T_2$, and $T_3$, to transform the instructor name in Table (a) to the email address in (b), as shown in the figure. $T_1$ caters to the first row, extracting the first character of the first name, concatenating it with the last name, and adding a literal to generate the email address in the target table. $T_2$ follows a similar approach but accommodates the second row, which includes a middle name. $T_3$ constructs the email based on the last name only and is applicable to the third row. However, with each transformation applicable to only a single row, their generality can be questioned. On the other hand, the transformation $T_G$, which utilizes optional fields and relative indexes, works universally for all three rows, offering a higher level of generality. One could argue that $T_G$ better captures the underlying pattern and is more readable. The same rationale applies to the mapping of (a) to (c), where CST identifies two transformations to map the tables, whereas a more general transformation, denoted as $T_H$, exists, covering all table rows.

Problem Studied: We prefer more general transformations that cover a larger number of input rows, as they are more likely to cover unseen data, may better show the underlying patterns in the data, and make it easier for a data scientist to validate the join process. However, there has been limited exploration into which principles of generality can be applied to transformations and whether these generalizations are effective when applied to tabular data. We study this problem in the context of transforming tables for explainable joinability, with the joined columns typically describing the same entities.
Our Approach: Our work aligns with the line of work on program synthesis, aiming to discover cell transformations that enable an equality join. It also falls within the realm of explainable joins, where transformations explain the reasons behind a join [30,18], albeit the discovered transformations may also aid in predicting missing values [8,22], conducting schema mapping [4,2], and performing data cleaning and repairing [10]. Our work explores several generalization concepts in transforming tables, incorporating those observed in our motivating example, and explores their impact on reducing the number of transformations while maintaining or improving their coverage. Our evaluation on two real-world datasets obtained from different sources reveals that generalization techniques significantly enhance transformation coverage, reduce the number of transformations required to map all rows, better address unseen data, and enhance the end-to-end result of the joinability task. Our contributions can be summarized as follows: (1) We explore the impact of generalization on transforming tables for explainable joinability, aiming to improve the adaptability and coverage of transformations. (2) We expand the domain of transformations by incorporating generalization concepts, including length invariance, repeating patterns, and bi-directionality. (3) Our comprehensive experiments unveil the superior performance of our method, compared to a state-of-the-art approach, across various metrics, including the coverage of transformations, the number of required transformations for mapping, and the ability to cover previously unseen data.

# 2 Related Works

The pertinent works related to ours can be categorized into (1) finding related tables and joinable row pairs, (2) discovering example-driven table transformations to enable an equi-join, and (3) model generalization.
(1) Finding Related Tables and Joinable Row Pairs: When data to be integrated are located in a data lake, a common initial step involves identifying related tables and potential joinable columns. Finding related tables is studied within the context of joinable tables [29,31,3], unionable tables [17,15], and tables that are semantically similar [26,27]. With relevant tables retrieved, many approaches, including ours, rely on a set of provided examples–whether human-provided or automatically generated–to perform a tabular join through some transformations. The problem of finding joinable row pairs where the matching pairs are not equi-joinable is well-studied [5,24,28,25], with methods ranging from basic textual similarity functions [5] to bipartite graphs of tokens and their similarity [24,25], and deep learning-based entity matching techniques [28,14]. While the predominant focus of this line of work revolves around matching relevant rows, our approach is centered on the generalization of explainable transformations that render two rows equi-joinable. Hence, these approaches can serve as preprocessors in our approach, aiding in the generation of examples.

(2) Discovering Example-Driven Table Transformations: The transformation of cells within a table is recognized as a crucial operation in tabular data integration [19,11], and the discovery of such transformations, guided by a set of examples, has been the subject of several studies [18,30,22,8,9]. FlashFill [8,9] and BlinkFill [22], the two pioneering contributions in this area, create an input graph based on user-provided examples and traverse the graph to generate transformations. Nevertheless, they heavily rely on accurate input examples and struggle when the examples are noisy. The huge search space and resource-intensive computation in FlashFill have inspired subsequent works to prioritize a smaller search space over generalizability.
For instance, Auto-join [30] employs predefined string-based units to generate transformations and a recursive backtracking algorithm to find an optimal one. Limiting the search space to a predefined set of transformations allows Auto-join to handle minor input noise. In a more recent study, the Common String-based Transformer (CST) [18] constrains the search space by utilizing common text sequences between source and target examples as textual evidence to construct the skeleton of transformations. The primary focus of all the aforementioned approaches has been on identifying transformations that cover a given example set, with less focus on the generalizability of the transformations to newly added data and unseen examples.

(3) Model Generalization: Within the broader context of our work, there exists a substantial body of literature on generalizing models for diverse inputs. Improving the generalization capacity of neural architectures, often confined to particular domains and tasks, is a well-studied topic [20,12,6,16]. In a closely aligned work, Shi et al. [21] study the compositional generalizability of transformer models, where they identify high-level categories of generalization, such as length generalization and the mixing and matching of artifacts, to improve the capacity of transformer models trained on simple subtasks to solve more complex tasks. Moreover, there is a growing interest in meta-learning, enabling models to adapt to new tasks or domains by leveraging knowledge acquired from previous tasks. Some recent works [7,23,13] identify abstract features that prove useful across different problem domains, hence enhancing generalization capabilities. To the best of our knowledge, the generalization of cell transformations for explainable joinability and its impact on performance have not been previously explored.

# 3 Preliminaries

In this section, we discuss the necessary foundation for our framework and the methods that our approach builds upon.
[Figure 2: Overview of a typical unequal join framework: (a) finding joinable tables and columns, (b) row matching, (c) transformation generation (T: <U1, U2, ...>), and (d) generalization, yielding an explainable join.]

# 3.1 An End-to-End Explainable Join Algorithm

Figure 2 depicts an overview of a typical unequal join framework. The process begins by identifying relevant tables and joinable columns, followed by the formation of source-target pairs for potentially joinable rows. In some cases, data scientists may provide example row pairs. String-based example-driven transformers such as CST [18] utilize the provided example set to construct transformation skeletons and generate a large set of candidate transformations along with their coverage. With candidate transformations and their coverage known, a transformation may be selected based on its coverage, or a minimal set of transformations covering all given example row pairs may be sought. Our work is on the generalization of the framework, where some strategies are applied either during or after the transformation generation phase, evolving the transformations into more general ones, which can lead to better coverage and a reduced set of covering transformations.

# 3.2 Cell Transformations

A cell transformation (in short, a transformation) is a function that takes an input cell value represented in the source formatting and generates the representation of that value in a desired target formatting. We follow a recent line of work relying on cell transformations to enable unequal joins [30,18] where cell transformations are constructed as a sequence of basic operations referred to as units. Each unit is applied on a cell value in the source and generates a fraction of a value in the target. The outputs of multiple units may then be concatenated to form the full cell value in the target. The units are string-based functions that either copy part of the input or a constant literal to the output.
A wide range of functions have been employed as units in the literature [30,18,9,22] and our work adopts four commonly occurring units from previous studies [30,18]:

– literal(str) returns str as the output,
– substr(i, j) provides the substring starting at index $i$ and ending at index $j$, or at the end of the input if the input is shorter,
– split(c, t) tokenizes the input based on character $c$ and returns the $t^{th}$ token,
– splitSubstr(c, t, i, j) combines the split and substr functions: the input is tokenized using character $c$ and then substr(i, j) is applied to the $t^{th}$ token.

One may question the significance of the splitSubstr unit when split and substr units already exist individually. The reason for these compound units is that transformations are formed as a concatenation of units, and stacking of the units is only possible by introducing new units (e.g., splitSubstr). For a given set of source and target pairs $(s_i, t_i)$, the coverage of a transformation is defined as the fraction of pairs in which applying the transformation on $s_i$ generates $t_i$. When a transformation can produce more than one output, the coverage is defined as the fraction of pairs in which applying the transformation on $s_i$ leads to a set that includes $t_i$. When transformations are employed to enable a join between two tables that are not otherwise equi-joinable, each transformation is applied to a row in the source, producing one or more candidates that are then looked up in the target.

# 4 Methodology

In this section, we introduce our generalization strategies and discuss how they are applied to a base set of transformations. We also analyze their impact on the coverage and the overall quality of the transformations.
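The four units and the coverage measure can be sketched in Python. The encoding below, and the lowercase example rows, are our own assumptions (lowercase sidesteps case handling, which the units do not model); it is not the authors' implementation.

```python
# A minimal sketch of the four transformation units and coverage;
# unit names follow the paper, the Python encoding is our own.

def literal(s):
    return lambda _inp: s

def substr(i, j):
    # substring [i, j), or up to the end of the input if it is shorter
    return lambda inp: inp[i:min(j, len(inp))]

def split(c, t):
    return lambda inp: inp.split(c)[t]

def split_substr(c, t, i, j):
    return lambda inp: substr(i, j)(inp.split(c)[t])

def apply_transformation(units, source):
    """A transformation concatenates the outputs of its units."""
    return "".join(unit(source) for unit in units)

def coverage(units, pairs):
    """Fraction of (source, target) pairs the transformation maps correctly."""
    hits = 0
    for s, t in pairs:
        try:
            hits += apply_transformation(units, s) == t
        except IndexError:  # a unit may not apply to this row
            pass
    return hits / len(pairs)

# A T1-style transformation: first letter, then last name, then a literal domain.
T1 = [substr(0, 1), split_substr(" ", 1, 0, 6), literal("@ualberta.ca")]
pairs = [("john martin", "jmartin@ualberta.ca"),
         ("anna maria smith", "asmith@ualberta.ca")]
print(coverage(T1, pairs))  # 0.5: the middle-name row is not covered
```

As in the motivating example, a transformation built for one row format fails on rows with a different structure, which is exactly the gap the generalization strategies of Section 4 target.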
# 4.1 Length Generalization Using Relative Indices

The invariance of a table transformation to input length is a desirable property, as it is expected to mitigate overfitting, especially if the input length varies between the extracted examples given as context and those used for testing the transformations. Many parameters in a transformation reference the input using indices, and those indices can be absolute or relative. Relative indices need an anchor point, which may be set to the beginning, the end, or any other position of the input. All transformations introduced in Auto-join and CST use absolute indices. Those indices can be replaced with relative indices without changing their semantics if we set the anchor point to the beginning of the input. However, one can move the anchor point of a relative index to a different location, allowing the expression of transformations that would not be possible otherwise. As possible anchor points, let $s$ and $e$ mark the beginning and the end of the input, respectively. Relative indices in a transformation can be stated with respect to either $s$ or $e$, allowing two-way parsing of the input. For example, consider the first and second rows from the Course Catalog and Linkedin tables in our running example from Section 1, as depicted in Figure 1. The transformation <substr(0, 1), splitSubstr(' ', 1, 0, 6), literal('@ualberta.ca')>, generated under CST, maps the first row but not the second because of the presence of a middle name. However, if we use relative indices, the transformation <substr(s, s+1), splitSubstr(' ', e, s, s+7), literal('@ualberta.ca')>, generated from the second row, will cover both rows.
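A hedged sketch of relative indices: we encode an index as an (anchor, offset) pair with anchor "s" (input start) or "e" (input end), resolved against the concrete input at application time. The encoding is our own assumption; in particular, we represent the paper's last-token index $e$ as E(-1), and the toy rows are hypothetical.

```python
# Relative indices as (anchor, offset) pairs, resolved per input (our encoding).

def resolve(idx, length):
    anchor, offset = idx
    return offset if anchor == "s" else length + offset  # "e" counts back from the end

def substr_rel(i, j):
    def unit(inp):
        a, b = resolve(i, len(inp)), resolve(j, len(inp))
        return inp[a:min(b, len(inp))]  # fall back to the end if the input is shorter
    return unit

def split_substr_rel(c, t, i, j):
    def unit(inp):
        toks = inp.split(c)
        return substr_rel(i, j)(toks[resolve(t, len(toks))])
    return unit

S = lambda k=0: ("s", k)   # offset from the start
E = lambda k=0: ("e", k)   # offset from the end; E(-1) denotes the last token

# <substr(s, s+1), splitSubstr(' ', e, s, s+7), literal('@ualberta.ca')>
T = [substr_rel(S(0), S(1)),
     split_substr_rel(" ", E(-1), S(0), S(7)),
     lambda _inp: "@ualberta.ca"]

emails = ["".join(unit(name) for unit in T)
          for name in ["john martin", "anna maria smith"]]
print(emails)  # ['jmartin@ualberta.ca', 'asmith@ualberta.ca']
```

Because the last-name token is addressed from the end of the token list, the same transformation covers rows with and without a middle name, which is the point of anchoring indices at $e$.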
# 4.2 Recurrence Generalization

Systematically handling repeating patterns is a generalization strategy that can reduce the complexity of integrating diverse data sources, improve the simplicity of mapping rules, and potentially decrease the likelihood of errors in the integration process. In the context of table transformations, a unit that is applicable to an element in an input sequence may be applicable to all elements in the sequence. For example, consider a sequence of space-separated terms in a source column and a transformation that takes the first letter of each term and concatenates them together to produce the target. If the input length, in terms of the number of terms, is fixed at $k$ for all rows, one can construct a sequence of $k$ splitSubstr functions, each taking a term in the input sequence and outputting the first character of the term. However, this approach will not work if the input length varies from one row to the next. We introduce repeating patterns, in the form of unit repetition and unit removal, to address this problem.

Unit Repetition

Introducing repetition to transformation rules is not as straightforward as one may wish. This is because our transformation units use indices to indicate the parts of the input they operate on, and the behavior of those indices must be defined under repetition. For example, consider split(' ', s), which breaks the input at the space character and takes the first element. Repeating this operation without changing the index would produce the same output, which is not useful. Since we want to break the input into smaller segments and operate on each segment through repetition, we focus on generalizing the split and splitSubstr functions.

Definition 1.
We define split and splitSubstr under repetition as

$$ split(c, i)^{r} = \{\, split(c, i+j) \mid j \in [0, r-1] \,\}, $$

$$ splitSubstr(c, i, p, q)^{r} = \{\, splitSubstr(c, i+j, p, q) \mid j \in [0, r-1] \,\}, $$

where $r$ is the repetition factor, $c$ is the separator character(s) breaking the input into segments, and $i$, $p$, and $q$ are indices, which can be either relative or absolute, with $i + j$ constrained by the input length. The concept of repetition can be applied to either a single unit or to a sequence of consecutive units within a transformation. As an example, consider the first and second rows from the Course Catalog and Publications tables in Figure 1. The transformation <splitSubstr(' ', s, s, s+1), literal('. '), split(' ', e)> maps the first row of the source table to the first row in the target table, but it cannot map the second row because of the presence of a middle name. However, the first two units can be applied multiple times under repetition. In particular, with at most two repetitions of those two units, both rows are covered.

Unit Removal

As a special case of repetition, a unit may either be present in or absent from the rule, with both choices generating valid outputs.

Definition 2. We define split, substr, and splitSubstr under removal as

$$ split(c, i)^{?} = split(c, i) \mid \varnothing, $$

$$ substr(p, q)^{?} = substr(p, q) \mid \varnothing, $$

$$ splitSubstr(c, i, p, q)^{?} = splitSubstr(c, i, p, q) \mid \varnothing, $$

where $?$ represents the removal concept, and the separator character and indices remain as previously defined.
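Both definitions can be sketched together: repeating a block of units shifts the token index by $j = 0..r-1$, each repetition factor $r$ yields one candidate output, and $r = 0$ corresponds to removing the block entirely. The encoding and the lowercase toy rows are our own assumptions, specialized to the initials example above; a row counts as covered when the target appears among the candidates, matching the multi-output coverage definition of Section 3.2.

```python
# Sketch of unit repetition/removal via a repetition factor r (our encoding).

def repeated_transform(inp, r):
    """<splitSubstr(' ', s+j, s, s+1), literal('. ')> repeated r times,
    followed by split(' ', e) (the last token)."""
    toks = inp.split(" ")
    out = ""
    for j in range(min(r, len(toks))):  # i + j is constrained by the input length
        out += toks[j][0:1] + ". "
    return out + toks[-1]

def candidates(inp, max_r=2):
    # r = 0 models removal of the repeated block; r >= 1 models repetition
    return {repeated_transform(inp, r) for r in range(0, max_r + 1)}

print(candidates("john martin"))       # {'martin', 'j. martin', 'j. m. martin'}
print(candidates("anna maria smith"))  # includes 'a. m. smith'
```

A single generalized transformation thus covers names with and without middle names, where the non-repeated variant needed one transformation per row format.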
The transformation $T_G$ applied to the second and third rows from the Course Catalog and Linkedin tables in Figure 1 illustrates this concept, where transforming the third row becomes possible by removing a subpart of a previously generated transformation.

# 4.3 Source-Target Direction Generalization

When dealing with numerous tables from different sources, determining which table should serve as the source and which as the target in the mapping process can be challenging. Ideally, we prefer a more content-rich table to act as the source, providing ample evidence to construct the target. However, identifying such content-rich tables may not be straightforward. As an example, consider the Course Catalog and Linkedin tables in Figure 1. One heuristic for detecting the source involves selecting the more informative column based on length [18]. This implies choosing the column with longer text as the source and treating the column with shorter text as the target. While this strategy may work for some table pairs, it is evident that it would fail in selecting the source here. In this case, the email column is longer due to the literal at the end, causing it to be chosen as the source, whereas the name column is more descriptive. One potential generalization is to avoid fixing the source and target and allow mapping in both directions. One can dynamically choose the direction that yields more general transformations. In our case, we take a small subset of each table as a sample and generate transformations based on this sample for both directions. The direction that can map rows with fewer transformations is deemed the source, as those transformations are expected to exhibit better generalization.

# 4.4 Simplicity Generalization on Unseen Data

In many real-world scenarios, table rows undergo updates, and new rows are regularly added over time. In such dynamic environments, there is a need to continuously incorporate new data in the join process.
However, repeatedly generating transformations from scratch each time new data arrives can be a time-consuming and resource-intensive task. A general transformation is expected to not only cover the existing rows, but also anticipate unseen rows that may arrive in the future. One approach to gauge the generality of a transformation is to estimate its coverage based on the data observed thus far. A transformation that covers more rows is deemed more general. However, without comprehensive knowledge about the distribution of future data, it is generally challenging to say with certainty whether one transformation will offer greater coverage than another.

A significant issue impacting the generality of transformations is the existence of accidental patterns in data. While such patterns may be present in a small sample, they are less likely to generalize to a larger sample or unseen data. Consequently, transformations constructed using a small sample may exhibit some of these accidental patterns. To mitigate such occurrences, we introduce simplicity as an additional measure of generality. In this context, simpler transformations are favored over more complex ones when all other factors, such as coverage on seen data, remain the same. In our case, a simplicity score may be defined in terms of either the number of units or the number of parameters in a transformation, where a transformation with fewer units or parameters is considered simpler. In our experiments, transformations are primarily chosen based on their coverage; in cases where there is a tie in coverage, a simpler transformation is favored over a more complex one. For example, consider two generated transformations with identical coverage on the input data, where all their units are the same except for one. Suppose one transformation has a substr(i, j) unit, while the other includes splitSubstr(c, 0, i, j).
When character $c$ is absent from the input data, both transformations have the same coverage. However, when considering transformation simplicity, we favor the former, because if unseen data happens to contain the character $c$, only substr(i, j) may perform the correct mapping.

# 5 Experiments

Our evaluation is conducted on two real-world datasets gathered from diverse sources with varying levels of noise and inconsistency, providing opportunities to evaluate various aspects of our generalization techniques.

(1) Web Dataset [30] is generated by sampling table-related queries from the Bing search engine and retrieving sets of tables from Google Tables corresponding to those queries. The tables in this dataset are manually selected as those representing the same entities with different formatting that are joinable via textual transformations. This dataset consists of 31 pairs of web tables, covering 17 diverse topics. On average, each table contains approximately 92 rows, and the average length of a join entry is 31 characters. It is a challenging benchmark due to inconsistencies in the data and the presence of different formatting and patterns across the rows of each key column.

(2) Spreadsheet Dataset [1] is taken from the public transformation benchmarks in FlashFill [8,9] and BlinkFill [22]. This benchmark is formed by the data cleaning challenges reported by spreadsheet users on Microsoft Excel forums. The dataset contains 108 table pairs, each with 34 rows and 19 characters per join entry on average.

In our experiments, we set the maximum number of units and the repetition degree to 3 and 2, respectively. We assume a manually annotated set of joinable row pairs is provided, as finding the matching pairs is beyond the scope of this study. Moreover, coverage values are averaged across all tables in the benchmark, and in cases where randomness affects the results, experiments are repeated at least 5 times, and the average is reported.
Our baseline for comparison is CST [18], which, to the best of our knowledge, represents the state of the art in finding explainable textual transformations.

S. Omidvartehrani et al.

Table 1: Best transformation coverage — (a) on all tables; (b) on tables without full-coverage transformations
Table 2: The number of required transformations
Table 3: The impact of removal and repetition concepts

# 5.1 Transformation Coverage

In this experiment, our aim is to generate transformations that cover all rows in the input. Since the framework is designed to generate a covering set, the coverage in our experiments is 100% for both our approach and the baseline. To evaluate the impact of our generalization, we introduce two alternative metrics to measure the coverage of transformations: (1) best transformation coverage, which indicates the coverage achieved by the transformation that produces the highest coverage on the input rows; and (2) the number of generated transformations, because when a smaller number of transformations is required to cover all input rows, it serves as an indicator that the transformations are more general. Table 1a summarizes the best transformation coverage when applying each generalization technique, as well as that of the baseline approach. Table 1b demonstrates the same evaluation, excluding tables that are covered by a single transformation. These excluded tables are considered easy, as a single transformation with non-general units may also achieve full coverage, leaving no room for generalization. Finally, Table 2 shows the total number of generated transformations required to achieve a coverage of 1.00 on all tables in the benchmark. The numbers inside parentheses indicate the improvement compared to the baseline. We analyze the effect of each generalization strategy individually:

Source-Target Direction Generalization: The “bidir.” column in the aforementioned tables denotes the performance when applying our bidirectional generalization (Section 4.3).
As demonstrated, our approach outperforms the baseline on the web dataset, which is a relatively more challenging dataset. This shows that the heuristic used in our baseline, which considers columns with lengthier rows as more context-rich, may not accurately label source and target columns. On the other hand, on the spreadsheet benchmark, our performance is on par with the baseline, since in this dataset the source is always lengthier.

Length Generalization: The utilization of relative indices considerably enhances the performance of our approach compared to the baseline, as observed across all metrics on both datasets. The “Rel.” column in Tables 1 and 2 quantifies this improvement. In each dataset, there are several tables covered with more transformations in the baseline due to the limitations of absolute indices in modeling length variance across rows. Applying relative indices results in the generation of more general transformations, capable of covering more rows than their absolute counterparts. This can lead to up to a 10% improvement in the coverage of the best transformation and an 8% reduction in the number of required transformations, particularly on the challenging web dataset.

Recurrence Generalization: Columns “+Rem.” and “+Rep.” denote the performance when augmenting relative-indexed transformations with unit removal, and with both unit removal and repetition, respectively. As shown, there is a slight improvement in both the best transformation coverage and the number of required transformations when employing these generalizations. However, these two metrics may not give the full picture, since our generalizations in fact considerably enhance the coverage of individual transformations. Therefore, instead of relying solely on the best transformation coverage, we measure the improvement for each transformation individually, summarized in Table 3.
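Returning to length generalization, a toy sketch illustrates why relative indices travel better across rows of varying length; `substr_abs` and `substr_rel` are illustrative helpers invented for this example, not units of the paper's DSL:

```python
def substr_abs(s, i, j):
    """Absolute-index unit: characters i..j-1 counted from the start."""
    return s[i:j]

def substr_rel(s, i, j):
    """Relative-index unit: offsets i..j-1 measured from the end of the row."""
    return s[len(s) + i : len(s) + j]

rows = ["John Smith (NY)", "Al Po (LA)"]   # rows of varying length
targets = ["NY", "LA"]                     # desired join keys

# Absolute indices fit the first row but break on the shorter second row,
# so a second transformation would be needed to reach full coverage.
abs_out = [substr_abs(r, 12, 14) for r in rows]
# A single relative-index unit covers both rows.
rel_out = [substr_rel(r, -3, -1) for r in rows]
print(abs_out, rel_out)
```

This mirrors the observation above: one relative transformation covers rows that would otherwise require multiple absolute-index transformations.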
For this evaluation, we compare the coverage of each transformation when only generalized by relative indices with those that are also augmented by recurrence generalization. Transformations with 100% coverage and those covering only a single row (i.e., typically literals) are excluded from this evaluation, as they cannot be generalized further. As shown, on the web dataset, both unit removal (denoted by “+Rem.”) and unit removal and repetition (denoted by “+Rep.”) generalize 20% or more of the transformations, and the coverage is increased by about 6%, which underscores the effectiveness and importance of recurrence generalization methods.

# 5.2 Generalization to Unseen Data

The improved coverage on unseen data in our approach is primarily achieved through length and simplicity generalization techniques. In our first experiment, we aim to measure the extent to which simplicity generalization can impact the generated transformations on each dataset. Specifically, we conducted the experiment using two tie-breaking strategies when transformations with equal coverage are to be selected. In one strategy, a simple transformation is prioritized over a complex one; in the other, the selection is made randomly. Our metric for simplicity is the total number of units in one setting and the total number of parameters in another. Simplicity-based tie-breaking results in at least a 5% reduction in the total number of units and a noticeable 17% decrease in the number of parameters needed. Not only do the simpler transformations lead to a faster join in large dynamic environments and provide an easier-to-understand mapping, but, as confirmed by the next experiment, they also yield better coverage on unseen data. In the next experiment, we randomly divide the rows of each table into two sets. One set represents the available data based on which transformations are built, and the other set represents unseen data that may be added in the future.
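The splitting protocol just described can be sketched as follows; the rows, the two candidate transformations, and the 20% split are invented for illustration:

```python
import random

def coverage(transform, pairs):
    """Fraction of (row, target) pairs the transformation maps correctly."""
    return sum(transform(r) == t for r, t in pairs) / len(pairs)

# Toy rows "user <i> id<iii>", with the last token as the desired join key.
pairs = [(f"user {i} id{i:03d}", f"id{i:03d}") for i in range(100)]

random.seed(0)
random.shuffle(pairs)
seen, unseen = pairs[:20], pairs[20:]      # 20% available, 80% held out as "future" rows

general = lambda s: s.split()[-1]          # token-based, length-general candidate
absolute = lambda s: s[7:12]               # absolute indices; only fits single-digit i

gen_cov = coverage(general, unseen)
abs_cov = coverage(absolute, unseen)
print(gen_cov, abs_cov)
```

The general candidate keeps full coverage on the held-out rows, while the absolute-index candidate, fitted to one row shape, degrades, which is the effect the tables below quantify.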
We use 10%, 20%, and 40% of the rows as the available data. For tie-breaking among transformations with equal coverage, two strategies are evaluated: simplicity-based and random selection. Table 4a shows the performance using random selection, and Table 4b shows the same experiment using simplicity-based selection, for both the baseline approach and ours.

Table 4: The impact of generalization on transformation coverage when limited data is available — (a) tie-breaking: random; (b) tie-breaking: simplicity-based

Two important observations can be made: (1) Regardless of the tie-breaking strategy employed, our approach consistently outperforms the baseline when measuring coverage on unseen data. The performance gap between the baseline and our approach widens when a smaller set of data is available for transformation finding. This implies that, compared to the baseline, our general transformations demonstrate superior performance when generated from a smaller set of available data. Notably, the gap is more pronounced in the web dataset, reflecting the dataset’s inherent complexities. (2) Employing simplicity-based tie-breaking enhances the coverage of both our approach and the baseline. Nonetheless, our approach benefits more from this generalization, and the improvement is more pronounced in the web dataset, particularly for smaller sample sizes.

# 5.3 End-to-End Join Performance

In the final set of experiments, we evaluated the performance of our approach on an end-to-end join between two tables with unequal column values. To employ transformations for joinability, we set the repetition degree to one, which ensures a single output for each transformation applied to an input.
In addition to the automated similarity-based example generation in CST (referred to as Automated Example Generation in the rest of this section), we also experimented with a small set of 5 manually prepared examples for each table, while the remaining rows were kept for testing, denoted as Manual Example Generation. Table 5 summarizes the unequal join performance in terms of Precision, Recall, and F1-Score (denoted by P, R, and F1, respectively) for our approach, CST, and another transformation-based joining approach, Auto-join. Due to the limitations of the sampling framework and subset-based generation in Auto-join, it was only employed when examples were automatically generated and, thus, large enough to be divided into subsets. As demonstrated, regardless of the sampling method, our approach outperforms all baselines by a noticeable margin on the web dataset, yielding up to an 8% improvement in F1-Score. The diversity and complexity of the patterns in the input rows of this dataset better highlight the ability of our approach to extract more general transformations and, consequently, achieve better join performance while maintaining the explainability of the mapping space. When examples are automatically generated in the spreadsheet dataset, our performance is on par with the baseline. This is because the dataset is clean, the rows are consistently covered by a few rather easy textual patterns, and the example generation framework returns a sufficient number of correct examples. On the other hand, as shown in Table 5b, the performance gap exists for both datasets when examples are limited to a few human-generated ones. This gap is expected, considering the better generalization of our approach to unseen data.

Table 5: Join performance of our approach and baselines — (a) automated example generation; (b) manual example generation
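The P/R/F1 numbers reported above can be computed from predicted and gold joined row pairs in the standard way; the pair data below is invented for illustration:

```python
def join_scores(predicted, gold):
    """Precision, recall, and F1 of predicted joined row pairs vs. gold pairs."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                       # correctly joined pairs
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {("r1", "s1"), ("r2", "s2"), ("r3", "s3"), ("r4", "s4")}
pred = {("r1", "s1"), ("r2", "s2"), ("r3", "s9")}   # one wrong, one missing match
p, r, f1 = join_scores(pred, gold)
print(round(p, 3), round(r, 3), round(f1, 3))
```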
Descriptions of real-world entities can vary across different sources, posing a challenge when integrating or exchanging data. We study the problem of joinability under syntactic transformations, where two columns are not equi-joinable but can become equi-joinable after some transformations. Discovering those transformations is a challenge because of the large space of possible candidates, which grows with the input length and the number of rows. Our focus is on the generality of transformations, aiming to make the relevant models applicable across various instances and domains. We explore several generalization techniques, emphasizing those that yield transformations covering a larger number of rows that are often easier to explain. Through extensive evaluation on two real-world datasets, employing diverse metrics for measuring the coverage and simplicity of the transformations, our approach demonstrates superior performance over state-of-the-art approaches by generating fewer, simpler, and hence more explainable transformations, as well as improving join performance.
# 1 INTRODUCTION

Modern database systems increasingly support both relational and graph queries, as reflected in the emerging SQL/PGQ standard. SQL/PGQ is part of the broader GQL initiative [8], which has been under development since 2019 by both academic and industry contributors under ISO auspices. The initiative consists of two parts: GQL, a standalone graph query language, and SQL/PGQ, which extends SQL with constructs for querying property graphs. SQL/PGQ allows users to define property graph views over relational data and query them using a pattern-matching syntax inspired by ASCII art. These graph queries return relational results, which can then be further processed using standard SQL [4, 5]. This gives the flexibility to move between graph and relational modeling depending on the query. For example, reachability is often easier to express using graph traversal, while aggregation is more natural in SQL. Despite this flexibility, it is unclear how the choice of model affects performance. Ideally, the user’s choice, whether to use SQL or graph patterns, should be driven by clarity and convenience, not performance concerns. For example, pattern-matching queries that could be rewritten as joins should benefit from existing join optimization techniques [16, 17]. Similarly, recursive SQL queries might perform better when viewed and executed as graph traversals. In other words, the way a query is written should not significantly affect its efficiency. Since query equivalence testing is generally undecidable, the goal should be feasible translations between the two formalisms, regardless of whether a query is written in SQL or SQL/PGQ. In this work, we present a short set of experiments that compare the performance of SQL and SQL/PGQ queries across several systems. Our study focuses on three questions: (1) Does the query language impact performance? (2) Can relational optimizations be reused in the graph setting? (3) Should graph views be fully materialized or kept virtual?
We ran a set of experiments on several query types using DuckDB with the DuckPGQ extension [21], and ran comparative tests on Google Cloud Spanner [7] and Neo4j [15]. Our results show that performance is often tied to how the query is expressed, suggesting that the two models are not yet fully decoupled. We argue that achieving such decoupling, where the system chooses the best execution plan regardless of query syntax, is key to making hybrid querying efficient. To move toward this goal, we outline two directions for future research: (1) applying and combining proven algorithmic approaches from both relational and graph processing to obtain performance guarantees, and (2) enabling internal query rewriting, so the system can automatically translate queries between models when beneficial.

Related Work. Classical work on query optimization has focused on finding efficient join orders [10–12]. Graph pattern matching has been extensively studied [1]. In sequential settings, Ullmann’s backtracking algorithm [22] has been optimized using trie indexing [20], symmetry breaking [9], and compression [2]. Cross-model query optimization is gaining popularity in both research and industry. Recent work has expanded traditional Select-Project-Join pipelines to include graph operators for more efficient neighbor lookups [13].

# 2 QUERYING PROPERTY GRAPHS IN SQL/PGQ

SQL/PGQ extends SQL to support property graph querying by defining graphs as views over relational data. It uses pattern matching to find patterns in those graphs, producing relations as output. This output can be further processed with SQL.

# Property Graphs as Relational Views

Property graphs store data in a flexible graph format, allowing labels and key-value properties on both nodes and edges. SQL/PGQ enables constructing such graphs from relational data. Let us consider a relational schema that consists of the following relations, with underlined primary keys.
• Person(pid, name, city)
• Account(aid, type)
• Own(pid, aid)
  – pid is a foreign key → Person(pid)
  – aid is a foreign key → Account(aid)
• Friends(pid1, pid2, since)
  – pid1, pid2 are foreign keys → Person(pid)
• Transfer(tid, from, to, amount)
  – from, to are foreign keys → Account(aid)

The following SQL/PGQ code constructs a property graph with Person and Account as nodes, and Friend (between people), Owns (person to account), and Transfer (between accounts) as edges.

```sql
CREATE PROPERTY GRAPH social_graph
  VERTEX TABLES (
    Person PROPERTIES (pid, name, city) LABEL "Person",
    Account PROPERTIES (aid, type) LABEL "Account"
  )
  EDGE TABLES (
    Friend SOURCE KEY (pid1) REFERENCES Person (pid)
           DESTINATION KEY (pid2) REFERENCES Person (pid)
           PROPERTIES (since) LABEL "Friend",
    Owns SOURCE KEY (pid) REFERENCES Person (pid)
         DESTINATION KEY (aid) REFERENCES Account (aid)
         LABEL "Owns",
    Transfer SOURCE KEY (from) REFERENCES Account (aid)
             DESTINATION KEY (to) REFERENCES Account (aid)
             PROPERTIES (amount) LABEL "Transfer"
  );
```

# Pattern Matching

On such property graph views, we can apply pattern matching, which extracts relations from graphs. Graph patterns, formalized in [4, 5], are specified using an ASCII-art-like syntax to describe structures in the graph. We distinguish between two types of patterns: bounded and unbounded. We refer to a pattern as bounded when it has a bounded length (e.g., a triangle of exactly three edges). Such queries can often be rewritten to SQL using standard joins, without recursion. In contrast, an unbounded pattern uses the Kleene star, allowing arbitrarily long edge traversal (e.g., a path of any length), typically requiring recursion in SQL. Here, the bounded Query 1 identifies triangles of friends.

```sql
SELECT *
FROM GRAPH_TABLE (social_graph
  MATCH (x:"Person") -[f1:Friend]-> (y:"Person"),
        (y:"Person") -[f2:Friend]-> (z:"Person"),
        (z:"Person") -[f3:Friend]-> (x:"Person")
  RETURN (x.name, y.name, z.name)
);
```

# Query 1: Bounded Friends Triangle

Each line after the MATCH specifies a directed friendship edge between two Person nodes. The commas between the lines act like joins. (Full syntax and semantics are in [4].) We can also add the following lines before RETURN to relate the matched people to their accounts and transfers, with filtering to detect suspicious money transfers:

```sql
(z:"Person") -[:Owns]-> (az:"Account"),
(ax:"Account") -[t1:Transfer]-> (ay:"Account"),
(ay:"Account") -[t2:Transfer]-> (az:"Account"),
(az:"Account") -[t3:Transfer]-> (ax:"Account")
WHERE x.city = y.city AND x.city != z.city AND t1.amount > t2.amount
```

In Query 2, we detect unbounded cycles of bank transfers with the Kleene star. That is, '->*' matches an unbounded sequence of edges. In particular, x transfers to z, z to y, and there is an unbounded sequence of transfers starting from y and ending in x.

```sql
SELECT *
FROM GRAPH_TABLE (social_graph
  MATCH p = ANY SHORTEST
        (x:"Account") -[t1:Transfer]-> (z:"Account")
                      -[t2:Transfer]-> (y:"Account")
                      -[t3:Transfer]->* (x:"Account")
  RETURN ;
);
```

# Query 2: Unbounded Transfers Cycle

Notice that we use ANY SHORTEST to restrict matching to a shortest cycle that satisfies the pattern; this avoids infinite loops when trying to detect the cycle. We can also add a filter:

WHERE px.city <> pz.city AND t1.amount > t2.amount

This introduces two filters before the RETURN. px.city <> pz.city ensures that the owners of x and z are from different cities. t1.amount > t2.amount restricts the path to cases where the first transfer amount is greater than the second. Such constraints are useful for detecting specific patterns, such as suspicious circular transfers, by combining structural and property-based conditions.
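The role of ANY SHORTEST, keeping unbounded traversal finite, can be mimicked procedurally: a breadth-first search returns the first (hence shortest) cycle back to the start node and terminates even on cyclic graphs, since each node is visited at most once. A minimal sketch over an invented transfer edge list:

```python
from collections import deque

def shortest_cycle_length(edges, start):
    """BFS over directed transfer edges; returns the length of the
    shortest cycle through `start`, or None if no such cycle exists."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        node, dist = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt == start:
                return dist + 1      # first return to start = shortest cycle
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

transfers = [(1, 2), (2, 3), (3, 1), (3, 4)]  # accounts; 1->2->3->1 is a cycle
print(shortest_cycle_length(transfers, 1))
```

A naive enumeration of all paths would loop forever on the 1→2→3→1 cycle, which is exactly why the declarative pattern needs ANY SHORTEST (or a depth bound, as in the recursive SQL below).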
Note that queries using the Kleene star can be rewritten as recursive SQL. However, the reverse is not true: there are queries expressible with recursive SQL that cannot be expressed in SQL/PGQ [6].

# 3 FROM SQL/PGQ TO (RECURSIVE) SQL

SQL/PGQ queries that do not use Kleene closure (i.e., no unbounded edge traversals) can be rewritten as basic SQL without recursion. For instance, Query 1 can be expressed in SQL as follows:

```sql
WITH FriendPairs AS (
  SELECT pid1 AS person, pid2 AS friend FROM Friend WHERE pid1 != pid2
  UNION ALL
  SELECT pid2 AS person, pid1 AS friend FROM Friend WHERE pid1 != pid2
)
SELECT DISTINCT f.pid1, f.pid2
FROM Friend AS f
WHERE EXISTS (
  SELECT 1
  FROM FriendPairs fp1
  JOIN FriendPairs fp2 ON fp1.friend = fp2.friend
  WHERE fp1.person = f.pid1 AND fp2.person = f.pid2
);
```

The results in [3] indicate that core SQL/PGQ, like core GQL, can be translated into first-order logic with transitive closure. When unbounded repetition is excluded, only basic first-order logic is needed, aligning with standard SQL without recursion. As discussed in Section 5, further work is required to translate these theoretical insights into practical tools and implementations.

Table 1: SQL Execution Time Divided by SQL/PGQ Time: DuckDB vs. Spanner

When unbounded repetition is involved, we need to use recursive SQL. Here, we limit the recursion depth to ensure we do not expand paths indefinitely, mirroring the termination condition of ANY SHORTEST. In practice, this bound guarantees that only short cycles are explored, approximating the behavior of ANY SHORTEST without risking infinite recursion.

```sql
WITH RECURSIVE paths (a_start, a_current, depth) AS (
  SELECT a_from, a_to, 1 FROM Transfer
  UNION ALL
  SELECT p.a_start, t.a_to, p.depth + 1
  FROM paths p JOIN Transfer t ON p.a_current = t.a_from
  WHERE p.depth < 2000  -- limit recursion depth
)
SELECT DISTINCT p.a_start AS account_in_cycle
FROM paths p JOIN Transfer t ON p.a_current = t.a_from
WHERE t.a_to = p.a_start AND p.depth >= 2
ORDER BY account_in_cycle;
```

# Query 3: Recursive Unbounded Transfers Cycle

Query 2 can be expressed with the recursive SQL Query 3. Translating from recursive SQL back to SQL/PGQ is generally not possible due to the higher expressiveness of the former [6]. Although a complete systematic translation does not exist, certain fragments can still be translated (see more in Section 5).

# 4 EMPIRICAL ANALYSIS

# Experimental Setting

We ran six queries (Q1–Q6) [19] covering bounded friend-triangle patterns (Q1–Q3) and unbounded multi-hop transfer patterns (Q4–Q6) on datasets of different sizes. We compared SQL and SQL/PGQ latency across DuckDB [21] and Google Cloud Spanner [7]. We checked latency for the same queries in Neo4j’s Cypher [18]. While query syntax varied slightly between systems, all implementations were based on SQL/PGQ.

Datasets Generation. The datasets of different sizes, 50, 100, and 150 transfers (rows or edges, depending on the model), were generated using Mockaroo [14]. Mockaroo samples each numeric value independently from a uniform distribution over the specified range.

Query Scenarios. We designed six queries for our experiments. The first three involve bounded patterns: they identify pairs of friends connected through a common friend, with increasing levels of complexity by adding financial transactions and city-based constraints. The next three involve unbounded patterns: they detect circular money transfers of arbitrary lengths, some with additional conditions such as participants living in different cities or the transfer amounts decreasing.

SQL/PGQ Support Across Systems. We tested SQL/PGQ queries in three systems: DuckPGQ, Google Cloud Spanner, and Neo4j.
DuckPGQ is an actively evolving extension of DuckDB that provides native SQL/PGQ support. It translates pattern-matching syntax into relational plans and utilizes in-memory data structures for graph operations. Google Cloud Spanner supports SQL/PGQ through an explicit graph schema layered on top of relational tables, enabling scalable graph querying within its distributed environment. We refer to Spanner’s property-graph functionality as “SQL/PGQ” because it adheres to two ISO standards [7]:

• ISO/IEC 9075-16:2023 — Information technology — Database languages — SQL Property Graph Queries (SQL/PGQ), Edition 1, 2023
• ISO/IEC 39075:2024 — Information technology — Database languages — GQL, Edition 1, 2024

Neo4j, a native graph database, uses Cypher and optimized data structures for fast traversal, but does not support standard SQL, so only SQL/PGQ results were evaluated. We conducted our experiments on Neo4j and Google Spanner via their respective web interfaces, meaning our queries were executed in a cloud-based environment whose exact hardware configuration was not directly under our control. For DuckPGQ, we ran all tests locally on a Windows 10 workstation with the following specifications: Intel i5-8265U CPU @ 1.60GHz, 16 GB RAM. DuckPGQ was installed using the latest release available at the time. These heterogeneous execution settings introduce bias, as cloud latency and resource provisioning can fluctuate independently of the query engine. Thus, we interpret cross-system results qualitatively rather than drawing conclusions from absolute runtimes.

# Experimental Results

SQL vs. SQL/PGQ in Spanner and DuckDB (Figure 1 and Table 1). Across bounded queries in Google Cloud Spanner, SQL queries exhibit lower latency than SQL/PGQ (Figures 1(g)-(i)), suggesting stronger optimization for traditional joins. Spanner does not currently support unbounded queries in SQL/PGQ, so we only report bounded queries.
In contrast, DuckDB consistently shows improved or comparable performance for SQL/PGQ, particularly on queries involving unbounded traversals (Figures 1(a)-(f)). In addition to comparing exact runtimes, which are highly diverse, in Table 1 we measure the ratio of SQL runtime to SQL/PGQ runtime to provide a more comparative assessment. Specifically, a value greater than 1 indicates SQL/PGQ is faster. From the table we see that for DuckDB most ratios exceed 1, revealing that SQL/PGQ is generally faster than SQL for these queries. In contrast, for Google Spanner SQL outperforms SQL/PGQ. These findings highlight that the efficiency of SQL compared to SQL/PGQ depends on internal optimizations.

Neo4j (Figure 2). Since Neo4j is a native graph database that does not support SQL, we compared bounded vs. unbounded SQL/PGQ-based queries, where each pair (Qi, Q(i+3)) is identical except that the latter replaces a fixed-length pattern with a Kleene-star traversal.

Graph Creation Times (Table 2). In DuckDB, constructing graph views on top of relational tables is almost instantaneous. Conversely, Neo4j and Spanner report higher overhead, surpassing average query execution latencies by a significant margin. This overhead becomes noteworthy in workflows where graph views must be repeatedly initialized.

Summary. Our results show that DuckDB often benefits from SQL/PGQ compared to plain SQL, regardless of whether queries are bounded or unbounded. By contrast, Google Cloud Spanner’s performance on bounded SQL queries is generally superior to its SQL/PGQ equivalents. Neo4j shows mixed performance between bounded and unbounded queries.

Figure 1: Execution time (ms) as a function of dataset size (#rows) for DuckDB (bounded & unbounded queries) and Spanner (bounded queries).
Table 2: Graph Creation Latency by Dataset Size and Platform

Figure 2: Comparison of execution time (ms) as a function of dataset size (#rows) for Neo4j queries (in Cypher).
SQL/PGQ is a new standard that integrates graph querying into relational systems, allowing users to freely switch between graph patterns and SQL. Our experiments show performance gaps between these models: semantically equivalent queries can exhibit varying performance depending on the formalism used, suggesting that current approaches handle each query type separately, applying distinct optimizations to each formalism. We argue that holistic optimization is necessary, where the system internally decides on the best algorithms regardless of whether queries are written in SQL or as graph patterns. We propose possible future research directions to unify these optimizations and mitigate the performance gaps.
# 1 Introduction

Incorporating non-parametric knowledge into large language models (LLMs) through additional retrieval modules has emerged as a promising approach to enhance both the accuracy and the timeliness of information (Borgeaud et al., 2022; Izacard et al., 2023). This has led to the rapid development of various retrieval-augmented LLM paradigms designed to provide correct answers to user queries.

Figure 1: Conceptual analysis of previous works and Agent-UniRAG: (a) the modular approach handles query types separately; (b) the adaptive approach uses a classifier to determine query types before executing them separately; (c) Agent-UniRAG processes all query types within a unified system using the Agent LLM concept.

Accordingly, these modern paradigms address either single-hop queries, which can be answered within a single document (i.e., Naive RAG), or complex multi-hop queries, which require the integration and synthesis of information from multiple documents (i.e., Advanced RAG) (Fan et al., 2024). Nonetheless, existing approaches suffer from several significant limitations, including a lack of explainability and traceability. An emerging research issue in this regard is that current methods either inefficiently handle simple queries with unnecessary computational overhead or fail to address complex multi-step queries (Tang and Yang, 2024) (Figure 1(a)). To address this issue, a potential method is to add a classification module that classifies the complexity of input queries and selects the appropriate RAG model to respond (Jeong et al., 2024) (Figure 1(b)). However, this approach is only suitable when the types of queries are predefined (in specific domains or custom benchmark datasets), which limits flexibility and scalability in various real-world applications.
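As a caricature of the adaptive approach in Figure 1(b), the sketch below routes queries through a complexity classifier to one of two pipelines. Every component here is an invented stub (a real system would use a trained classifier and actual RAG pipelines); it is included only to make concrete why predefined query types limit flexibility:

```python
def classify(query):
    """Toy keyword heuristic standing in for a trained complexity classifier.
    Queries outside the predefined cue list are silently treated as single-hop,
    illustrating the brittleness discussed in the text."""
    multi_hop_cues = ("compare", "both", "and then", "relationship between")
    return "multi-hop" if any(c in query.lower() for c in multi_hop_cues) else "single-hop"

def naive_rag(query):
    return f"[single-hop answer to: {query}]"     # stub Naive RAG pipeline

def advanced_rag(query):
    return f"[multi-hop answer to: {query}]"      # stub Advanced RAG pipeline

def adaptive_rag(query):
    """Dispatch to the pipeline chosen by the classifier."""
    return naive_rag(query) if classify(query) == "single-hop" else advanced_rag(query)

print(adaptive_rag("Who wrote Dune?"))
print(adaptive_rag("Compare the authors of Dune and Foundation"))
```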
Recently, the LLM agent, which leverages LLMs to execute complex tasks, has emerged as a promising approach to enable interpretability and reasoning capability for LLMs (Zhao et al., 2024). Specifically, the LLM is regarded as the primary controller, integrated with essential components such as planning, memory, and action execution to carry out the operations necessary for complex tasks (Wang et al., 2024a). Based on this emerging concept, this study raises a research question: Can the LLM agent enable the interpretability and reasoning capability of RAG systems in a unified manner? Figure 1(c) illustrates our proposed approach, which is designed to enhance the interpretability and effectiveness of LLMs in RAG tasks, compared with previous approaches in this research field. Specifically, we leverage the emerging concept of LLM agents, employing LLMs as central controllers to unify RAG tasks. Our unified agent is capable of handling queries that require reasoning processes (including both single-hop and multi-hop queries) through self-guided instructions and interaction with an external knowledge base. Furthermore, most current LLM agent frameworks rely on closed-source models with very large weight sizes (e.g., GPT-4 (OpenAI, 2024)), which limits their reproducibility and controllability. Our primary focus, therefore, is on enabling trainable open-source LLM agents. In this regard, we also introduce a synthetic dataset named SynAgent-RAG to train these open-source LLM-based agents for the unified RAG system. In summary, the main contributions of this study are three-fold: (i) We propose a unified RAG system using the concept of the LLM agent, which can handle queries that require reasoning processes (e.g., single-hop and multi-hop queries) via self-guided instructions and interaction with the external knowledge base to derive a response to the input queries.
To the best of our knowledge, this paper is the first study to execute a unified RAG system in an end-to-end manner. (ii) We process and introduce the SynAgent-RAG dataset, which contains 16,987 synthetic samples that enable small open-source modern LLMs (e.g., Llama-3-8B) to adopt the proposed Agent-UniRAG approach via instruction finetuning. This contribution is important for achieving the desired flexibility and scalability, since most emerging LLM agent technologies only work well with very large LLMs as the backbone. (iii) We evaluate the proposed approach on various RAG benchmarks, including the test set of our proposed SynAgent-RAG dataset. The experimental results show that our approach outperforms previous approaches. Furthermore, with small LLMs (e.g., Llama-3-8B) instruction-finetuned on the proposed dataset, we achieve competitive performance compared to closed-source (e.g., GPT-4) and larger open-source agent LLMs (e.g., Llama-3-70B).

# 2 Literature Reviews

# 2.1 Retrieval-Augmented LLM

The evolution of RAG in the era of LLMs can be divided into three categories: Naive RAG, Advanced RAG, and Modular RAG (Gao et al., 2023). Naive RAG and Advanced RAG are typical Retrieve-Read paradigms (Ma et al., 2023), which focus on finding answers in a single document (i.e., single-hop queries (Ram et al., 2023)). Meanwhile, the recently emerging Modular RAG goes beyond the two aforementioned paradigms, requiring iterative access to both LLMs and retrievers multiple times (i.e., multi-hop queries (Trivedi et al., 2023)). Specifically, dynamically selecting the suitable strategy (i.e., single-hop or multi-hop) for unified RAG tasks has become an emerging research issue in this field (Jeong et al., 2024).
# 2.2 LLM Agent Framework

The concept of LLM agents refers to LLM applications that can execute complex tasks, in which LLMs serve as controllers directing the flow of operations needed to complete a task or user request (Wang et al., 2024a). Accordingly, an LLM agent framework consists of four core components: User Request, Agent, Planning, and Memory. HuggingGPT (Shen et al., 2023) was introduced as one of the first comprehensive LLM-powered agent frameworks, using LLMs (i.e., ChatGPT) and the ML community (i.e., Hugging Face) to process inputs from different modalities. Subsequently, Yin et al. (2023) introduced LUMOS, an agent framework for trainable open-source LLMs. The framework adopts a modular architecture with a planning module that learns subgoals and a grounding module trained to translate subgoals into actions, using tools in the execution module. Inspired by these works, we present a trainable open-source LLM-based agent framework for unified RAG tasks, focusing on integrating the interpretable ability of LLMs to determine the next action for solving RAG tasks.

# 3 Methodology

This section introduces the design of Agent-UniRAG. Following the typical pipeline of an LLM agent framework, Agent-UniRAG operates in a loop and includes four main components: the Planning Module, Tool Using Module, Working Memory Module, and Reflector Module, as shown in Figure 2. Figure 2: Overall design of Agent-UniRAG.

# 3.1 Planning Module

Leveraging the reasoning capabilities of modern LLMs, this module is designed to systematically determine the actions required to address a user's request (Input Query) at each step of the process. Specifically, the agent decides between two primary actions at each decision point: • Action: Search – This action is triggered when the agent needs to acquire additional external knowledge to progress toward solving the problem.
• Action: Final Answer – This action is taken when the agent has accumulated sufficient information to confidently provide a response to the query. To implement this decision-making process, Agent-UniRAG utilizes the ReAct mechanism (Yao et al., 2023), which allows the agent to iteratively reflect on and refine its execution plan. The mechanism guides the agent through a structured sequence of steps: Thought, Action, and Evidence Feedback. By continuously evaluating and integrating these steps, the agent can address complex tasks with high precision.

# 3.2 Search Tool

At each stage where external knowledge is required (Action: Search), the agent interacts with the Knowledge Base through the Search Tool by issuing a search query formulated by the Planning Module. Querying external knowledge grounds the reasoning process in reliable and up-to-date information beyond the agent's internal knowledge. This ensures that the agent's responses are accurate and contextually relevant, especially for tasks requiring current or specialized domain knowledge. The retrieved external evidence supports the resolution of the input query, functioning as a document retrieval task.

# 3.3 Reflector Module

Documents retrieved from external knowledge bases often include irrelevant or extraneous information, especially when the knowledge base cannot adequately satisfy the query. Incorporating such unfiltered data into LLMs can introduce noise, degrade performance, or even mislead the model. Inspired by (Shinn et al., 2024), to mitigate this issue we design a module called the Evidence Reflector, which operates after the Search Tool and provides evidence feedback to the LLM. The Evidence Reflector filters out irrelevant content and refines the retrieved information, delivering more focused and relevant insights back to the agent. If no suitable evidence is found, it responds with "No information found."
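The interaction between the Planning Module, Search Tool, and Evidence Reflector can be sketched as a simple control loop. This is a minimal illustration only: the function bodies below are hypothetical stubs (in the real system, `plan` is an LLM call, `search` hits the external knowledge base, and `reflect` is another LLM call), and the step budget is an assumed parameter.

```python
# Minimal sketch of the ReAct-style loop (hypothetical stubs; not the
# paper's actual implementation).

def plan(query, memory):
    # Decide the next action from the query and prior (action, query, evidence) steps.
    if any("Paris" in evidence for _, _, evidence in memory):
        return ("final_answer", "Paris")
    return ("search", "capital of France")

def search(search_query):
    # Stand-in for the Search Tool querying the external knowledge base.
    return ["France is a country in Europe.", "Its capital is Paris."]

def reflect(search_query, documents):
    # Evidence Reflector: keep only content relevant to the search query.
    relevant = [d for d in documents if "capital" in d.lower()]
    return " ".join(relevant) if relevant else "No information found."

def agent_loop(query, max_steps=5):
    memory = []  # working memory of (action, search query, evidence) triples
    for _ in range(max_steps):
        action, payload = plan(query, memory)
        if action == "final_answer":
            return payload
        evidence = reflect(payload, search(payload))
        memory.append(("search", payload, evidence))
    return "Budget exhausted"

print(agent_loop("What is the capital of France?"))  # -> "Paris"
```

The loop terminates either when the planner emits a final answer or when the step budget is exhausted, mirroring the two termination conditions of the framework.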
This feedback is critical in guiding the model's subsequent actions, ensuring that the decision-making process remains both accurate and efficient. The agent can then better locate and leverage relevant information, thereby improving both the quality and precision of its responses.

# 3.4 Working Memory Module

The Working Memory Module functions as a prompt memory, designed to store the input query and internal logs, including previous thoughts, actions generated by the LLM, and the evidence extracted through the LLM's interactions with tools at each step. This memory is processed by the LLM to inform and guide subsequent actions. Furthermore, the Working Memory Module ensures the system's transparency and explainability by recording the reasoning process, including the gathered knowledge and the decisions made at each step. This record provides insight into how conclusions were reached, enhancing the trust and interpretability of the system. Figure 3: Overview of the proposed SynAgent-RAG dataset.

# 3.5 Agent Loop

With all the modules defined, the agent operates within a loop that can be terminated either by the agent itself or upon reaching a preconfigured computing budget. In the first case, the pipeline terminates when the planning process confirms that sufficient external evidence has been gathered to answer the input query. In the second case, a parameter $k$ is preconfigured as the agent's computing budget, preventing the agent from processing too much information; the loop terminates when this budget is exhausted. In either case, the agent then triggers a 'Final Answer' action to aggregate all collected evidence and produce the final answer.

# 4 SynAgent-RAG Dataset

While the framework is fully compatible with larger LLMs (e.g., GPT-4), deploying it with smaller LLMs necessitates an additional training process to maintain stability at each step.
To address this challenge, we introduce SynAgent-RAG, a synthetic dataset designed for Agent-UniRAG. It is built through a distillation approach (Semnani et al., 2023), in which GPT-4 serves as the teacher model to generate data and smaller models (e.g., Llama 3) are the distilled students. The primary objective of SynAgent-RAG is to equip the smaller LLM agent with the capability to reason, analyze, and synthesize information drawn from an external knowledge base before delivering a well-reasoned response to complex queries. The construction of SynAgent-RAG follows the process illustrated in Figure 3.

# 4.1 Dataset Construction Process

# 4.1.1 Knowledge Base

To construct a knowledge base that demands thoroughness, reliability, and up-to-date information across a wide range of fields, we utilize Wikipedia's Vital Articles Level 5. These articles represent a curated collection encompassing essential topics for a comprehensive understanding of human knowledge. Prior to constructing the dataset, we carefully divided the articles into two separate sets, one for training and one for testing, to ensure a balanced evaluation of the model's performance.

# 4.1.2 Question Generation

To generate questions that require multiple inference steps to arrive at a final answer, it is crucial to group related passages from source articles. We hypothesize that related passages are interconnected through hyperlinks within each Wikipedia article. For each article, we randomly select a passage from its core content as the main passage $m_i$. Then, to keep this process scalable, we leverage GPT-4 to determine which hyperlinks from $m_i$ are most relevant to its content, following the prompt template in Figure 5. This process identifies up to 5 supporting articles with associated hyperlinks.
Consequently, we obtain a set of main–supporting passage pairs $D_s = \{(m_i, \mathbf{s}_i)\}_{i=1}^{n}$. Given $D_s$, we construct both single-hop and multi-hop questions $q_i$ that adhere to specific criteria following previous works in the field. Single-hop questions are designed to be straightforward and answerable solely from the information in the main passage $m_i$. In contrast, multi-hop questions require information from multiple passages within the pair $(m_i, \mathbf{s}_i)$, demanding several inferential steps to derive the final answer. Furthermore, when employing GPT-4 with the specified prompt templates (see Figures 6 and 7), the generated questions and long-form reference answers exhibit a high level of reasoning and analysis.

# 4.1.3 Solution Annotation

The solution annotation, resulting from the planning and action decisions of the teacher model when solving complex tasks, is the key to effectively distilling strong reasoning capabilities into student models. In this process, we generate solution annotations for questions as a series of steps: Thought, Action, and Evidence Feedback. Starting from the original question $q_i$, at each step $t$ GPT-4 performs two tasks that replicate the real-world RAG scenario in which external knowledge must be retrieved: i) provide a short rationale on how to utilize the Search Tool to address the question (Thought $r_i^t$) and formulate a detailed search query (Action $a_i^t$) to retrieve the necessary information; ii) using the search query $a_i^t$ and the relevant sources $(m_i, \mathbf{s}_i)$ for question $q_i$, extract the most concise information from those sources and synthesize it as Evidence Feedback $e_i^t$.
The results of step $t$, comprising $\{r_i^t, a_i^t, e_i^t\}$, are concatenated with the question and prior steps in the order $q_i, r_i^1, a_i^1, e_i^1, \ldots, r_i^t, a_i^t, e_i^t$ and used as the input for the agent to determine the plan and actions at the subsequent step $t+1$. The process continues until the agent concludes with the statement "I have the final answer", indicating that sufficient evidence has been gathered. At this point, denoted as step $T$, the final answer is also provided. Finally, the solution annotation for question $q_i$ includes the thoughts $\mathbf{r}_i = \{r_i^1, \ldots, r_i^T\}$, search queries $\mathbf{a}_i = \{a_i^1, \ldots, a_i^{T-1}\}$, evidence feedbacks $\mathbf{e}_i = \{e_i^1, \ldots, e_i^{T-1}\}$, and the final answer. Details of the prompts for this process are given in Figures 8 and 9.

# 4.1.4 Annotation Verification

Since the data are generated by an LLM, there are instances where the process fails to produce a final answer. To address this, we implement both human and automated checks to ensure the scalability and reliability of the process. Additionally, we introduce an instruction eliminator, referred to as the Verification Module, to filter out failed annotations. We hypothesize that if the process produces a final answer similar to the reference answer, then the annotation quality is high. Using a specified prompt template, GPT-4 is tasked with generating a brief rationale and then assigning an integer score from 0 to 5, indicating both the similarity between the predicted and reference answers and the relevancy to the input query.
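The verification step amounts to a filter over judge-scored annotations. The sketch below is illustrative only: `judge_score` is a crude overlap heuristic standing in for the GPT-4 judge, and the field names are assumptions.

```python
# Sketch of the annotation-verification filter (the real judge is GPT-4,
# prompted to output a rationale plus an integer score in [0, 5]).

def judge_score(predicted, reference):
    # Hypothetical stand-in for the LLM judge: crude token overlap mapped to 0-5.
    p = set(predicted.lower().split())
    r = set(reference.lower().split())
    overlap = len(p & r) / max(len(r), 1)
    return round(overlap * 5)

def filter_annotations(annotations, threshold=4):
    # Keep only annotations whose judged score is 4 or 5.
    return [a for a in annotations
            if judge_score(a["predicted"], a["reference"]) >= threshold]

anns = [
    {"predicted": "the capital is Paris", "reference": "the capital is Paris"},
    {"predicted": "unsure", "reference": "the capital is Paris"},
]
print(len(filter_annotations(anns)))  # -> 1 (the second annotation is dropped)
```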
By employing the Verification Module to filter annotations, we ensure the quality of the dataset by retaining only those annotations that achieve a score of 4 or 5.

# 4.2 Dataset Analysis

After the annotation generation process, our dataset comprises 16,987 annotated training samples and 1,197 testing samples. Figure 4 shows the distribution of question types and indicates that our dataset largely consists of 'how' questions, confirming our initial goal of constructing a dataset that enhances the agent's ability to reason and synthesize information. In real-world applications, RAG systems typically handle relatively simple queries. To account for this, we deliberately incorporated a higher proportion of queries requiring minimal search and fewer pieces of supporting evidence. On average, each training annotation in our dataset requires two supporting passages. This design ensures that our dataset reflects practical demands while accommodating varying complexities of user queries. Moreover, to the best of our knowledge, ours is the first dataset to integrate Chain-of-Thought (CoT) reasoning, offering enhanced guidance for the agent to interact with external knowledge. Figure 4: Question type distribution on the training set.

# 5 Experiments

# 5.1 Experimental Setup

# 5.1.1 Datasets and Evaluation Metrics

We evaluate the unified RAG model using both single-hop and multi-hop datasets. Specifically, we employ six benchmark datasets: three single-hop (SQuAD (Rajpurkar et al., 2016), Natural Questions (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017)) and three multi-hop (MuSiQue (Trivedi et al., 2022), HotpotQA (Yang et al., 2018), 2WikiMultiHopQA (Ho et al., 2020)). To compare our model against recent state-of-the-art RAG systems on these datasets, which feature short-form answers, we utilize F1, Exact Match (EM), and Accuracy (Acc) as evaluation metrics.
The F1 score measures the word overlap between the predicted and ground-truth answers, EM checks for exact matches, and Acc verifies whether the predicted answer includes the ground truth. To adapt to the short-form answer setup, we use GPT-4 to extract concise answers from the detailed responses generated by the agent, as illustrated in Figure 10. For each dataset, we benchmark on 500 samples processed by (Jeong et al., 2024) and (Trivedi et al., 2023). Additionally, we evaluate performance on the SynAgent-RAG test set, comparing small open-source LLMs with larger models used as the backbone of the Agent-UniRAG framework. We employ ROUGE-L and BLEU metrics to assess long-form answers. ROUGE-L measures similarity based on the longest common subsequence (LCS), while BLEU calculates n-gram precision with a brevity penalty to account for fluency and accuracy. Given the distinct response styles of each model, a comprehensive evaluation requires assessing their ability to exhibit analytical skills and produce logically coherent long-form responses. To this end, we also use GPT-Score, an LLM-based evaluator that prompts an LLM to compare the generated answer with the reference answer and input query. GPT-Score specifically evaluates the semantic alignment between the predicted and reference answers, thereby providing a more nuanced assessment of model performance.

# 5.1.2 Retrieval System and Corpus Setup

For the experiments on the short-form answer datasets, to ensure a fair comparison with the methodology of (Jeong et al., 2024), we utilize the BM25 retriever as the baseline retrieval model across all corpora. In addition to this baseline, we also experiment with adding the Multilingual E5 Large model (Wang et al., 2024b) as a dense reranker after the sparse BM25 retrieval step, to observe whether better retrieval results lead to better agent performance.
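The three short-form metrics described in §5.1.1 can be sketched with their standard token-level definitions; normalization here is simplified to lower-casing and whitespace splitting, whereas benchmark evaluation scripts typically also strip punctuation and articles.

```python
from collections import Counter

def normalize(text):
    # Simplified normalization: lowercase + whitespace tokenization.
    return text.lower().split()

def exact_match(pred, gold):
    # EM: normalized prediction equals normalized ground truth.
    return float(normalize(pred) == normalize(gold))

def accuracy(pred, gold):
    # Acc: whether the predicted answer contains the ground truth.
    return float(gold.lower() in pred.lower())

def f1(pred, gold):
    # F1: harmonic mean of token-level precision and recall.
    p_toks, g_toks = normalize(pred), normalize(gold)
    overlap = sum((Counter(p_toks) & Counter(g_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p_toks)
    recall = overlap / len(g_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Barack Obama", "barack obama"))        # -> 1.0
print(accuracy("It was Barack Obama", "Barack Obama"))    # -> 1.0
print(round(f1("Barack Hussein Obama", "Barack Obama"), 2))  # -> 0.8
```

Note the asymmetry: Acc only requires containment, so a verbose prediction can score 1.0 on Acc while scoring 0.0 on EM, which is why GPT-4 is used above to extract concise answers before scoring.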
For the external corpus, we index different sources for different dataset types. Specifically, for single-hop datasets, following (Jeong et al., 2024), we use the Wikipedia corpus preprocessed by (Karpukhin et al., 2020), while for multi-hop datasets, we use the corpus preprocessed by (Trivedi et al., 2023). For the experiment on the test set of our SynAgent-RAG dataset, instead of indexing documents into a corpus, we focus on measuring the model's reasoning capability under optimal retrieval conditions. Here, we factor out the performance of the retrieval system and assume the retrieved documents are correct and relevant to the original question by directly returning the reference documents as the results of the retrieval phase.

# 5.1.3 Models

In this study, we compare our approach, Agent-UniRAG, against several retrieval-augmented LLM strategies: Self-RAG (Asai et al., 2023), which adaptively retrieves passages on demand and generates and reflects on retrieved passages; Adaptive-RAG (Jeong et al., 2024), which dynamically adjusts retrieval based on question complexity; and IRCoT (Trivedi et al., 2023), a state-of-the-art method leveraging iterative retriever-LLM interaction through Chain-of-Thought in-context reasoning. The baseline models in these methods use GPT-3.5-Turbo, which is larger than the Llama-3-Instruct backbone of our approach. To further assess the effectiveness of our framework, we conducted an ablation study on multi-hop datasets. First, we removed the Reflector Module to assess whether directly using the retrieved knowledge, which may include noise, as evidence feedback for the agent degrades performance. Second, we evaluated the effect of bypassing the gradual retrieval process by removing the Planning Module. In this scenario, the LLM was tasked with generating all necessary queries first and then using the retrieved information to directly answer the input query.
This setup helps us understand the importance of iterative information retrieval in enhancing the agent's decision-making accuracy. Table 1: Results on different single-hop benchmark datasets. \* results are taken from Jeong et al. (2024) with GPT-3.5 as the backbone LLM for both previous approaches. Bold text indicates the best results. Table 2: Results on different multi-hop benchmark datasets. \* results are taken from Jeong et al. (2024) with GPT-3.5 as the backbone LLM for both previous approaches. Bold text indicates the best results.

# 5.1.4 Training Configurations

Agent-UniRAG uses the instruction version of Meta-Llama-3-8B as the backbone open-source LLM and is instruction-finetuned on the proposed SynAgent-RAG dataset. The fine-tuning process spanned 10 hours on a single DGX node with 8 A100 GPUs, each equipped with 40GB of VRAM. The learning rate was set to $2e^{-5}$, and the global batch size was 256. The model was trained for 2 epochs using the AdamW optimizer (Loshchilov and Hutter, 2017).

# 5.1.5 Training Prompt Template

We distill Agent-UniRAG in a multi-task setting by fine-tuning on three subtasks following the proposed framework, to guide its planning, action decisions, and evidence filtering. Annotations are organized in conversational formats to facilitate interaction between components, as follows. Conversation planning module annotation: As illustrated in Figure 12, we start by using the user role to provide the question $q$ in the prompt. The planning module then appends the first thought $r^1$ and the initial search query $a^1$ as the first response supervision. For subsequent turns, we act as the user and provide the extracted evidence $e^{t-1}$ of the last search query $a^{t-1}$ to the planning module.
The response supervision dictates whether the planning should terminate with the thought "I have the final answer."; if not, the response should include a new thought $r^t$ along with a new search query $a^t$. Conversation final answer annotation: Instead of letting the LLM generate the final answer within the planning module, as in the data generation process, we add more control to the pipeline by separating the task of producing the final answer into its own subtask. We collect the gathered evidence $\{e^1, \ldots, e^{T-1}\}$ together with the question as the user prompt and treat the final answer as the response (depicted in Figure 14). Conversation Evidence Reflector annotation: As shown in Figure 13, we provide the search query $a^t$ and the relevant source containing the main–supporting passage pair $(m, \mathbf{s})$ in the user turn; the extracted evidence $e^t$ serves as the response. Since SynAgent-RAG annotations are conversational, we structure them as $\{x_1, y_1, \ldots, x_i, y_i, \ldots, x_n, y_n\}$, where $x_i$ is the $i$-th user prompt and $y_i$ its response. During training, we input each entire multi-turn annotation into the model, calculating the loss solely on the tokens of the responses $Y = \{y_1, \ldots, y_i, \ldots, y_n\}$ and applying a binary mask to the user prompt tokens to prevent computing the loss on them. The final loss function is $$ L = - \sum_{j} \log p_{\pi}(t_j \mid t_{<j}) \times \mathbb{1}(t_j \in Y) $$ where $t_j$ denotes the $j$-th input token and $\mathbb{1}(\cdot)$ is the Boolean indicator function.

# 5.2 Main Results

We present a detailed performance comparison of the proposed approach with previous methods, as shown in Table 1 for single-hop datasets and Table 2 for multi-hop datasets.
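The masked loss from §5.1.5 can be checked numerically; the token log-probabilities below are dummy values standing in for the model's softmax outputs, and the mask marks which positions belong to response tokens $Y$.

```python
import math

def masked_nll(log_probs, response_mask):
    # L = -sum_j log p(t_j | t_<j) * 1(t_j in Y): only response tokens
    # (mask == 1) contribute; prompt tokens (mask == 0) are ignored.
    assert len(log_probs) == len(response_mask)
    return -sum(lp * m for lp, m in zip(log_probs, response_mask))

# Dummy sequence: 3 prompt tokens (masked out) then 2 response tokens.
log_probs = [math.log(0.9), math.log(0.8), math.log(0.7),
             math.log(0.5), math.log(0.25)]
mask = [0, 0, 0, 1, 1]
loss = masked_nll(log_probs, mask)
print(round(loss, 4))  # -(log 0.5 + log 0.25) = log 8 ≈ 2.0794
```

In practice the same effect is achieved by setting the labels of prompt tokens to an ignore index so the cross-entropy skips them; the arithmetic is identical to the indicator mask above.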
Notably, Agent-UniRAG, which leverages a small open-source LLM as its backbone, demonstrates competitive performance relative to recent state-of-the-art models that use significantly larger LLMs. A key strength of our model is its ability to handle diverse query types uniformly and simultaneously. Beyond this, we make three specific observations. Agent-UniRAG can effectively interact with the external knowledge base. We observe that increasing the search limits and the number of top-K retrieved documents leads to performance improvements. Specifically, with top-K retrieval set to 12 and the integration of a dense encoder module for reranking, Agent-UniRAG substantially outperforms previous methods, achieving state-of-the-art results on the majority of benchmark RAG datasets in this research field. Additionally, in the single-hop setting, when the maximum search limit is increased from 1 to 'No limit', we observe a further increase in performance, highlighting the LLM agent's capability to interact with evidence feedback and then reason about and refine its search queries to obtain better retrieval results. The importance of the designed modules in the pipeline. In the multi-hop reasoning setting, the ablation studies (Table 2) show that removing the Evidence Reflector module results in noticeable performance degradation, particularly on more complex datasets like MuSiQue (Trivedi et al., 2022), underscoring its critical role in providing concise and relevant evidence feedback that helps the agent make better subsequent decisions. We also removed the Planning Module, which serves as the central component of the pipeline. Its removal led to a more substantial decline in performance metrics, illustrating its pivotal role in orchestrating the agent's multi-step reasoning process and the necessity of iterative information retrieval.
Table 3: Agent-UniRAG compared with Llama-3-70B-Inst and GPT-4-Turbo on the SynAgent-RAG test set. Effectiveness of the SynAgent-RAG dataset in distilling reasoning capability. Table 3 presents the results on the test set of the SynAgent-RAG dataset. Upon analysis, it becomes evident that traditional metrics like ROUGE-L and BLEU, which focus on lexical overlap, are insufficient for evaluating reasoning and accuracy in long-form answers. In contrast, GPT-Score, leveraging LLMs for semantic evaluation, provides a more accurate assessment. Our proposed Agent-UniRAG model, fine-tuned on the SynAgent-RAG training set, demonstrates strong performance on GPT-Score, achieving results comparable to significantly larger models such as Llama-3-70B-Inst and GPT-4. Notably, Agent-UniRAG achieves this level of performance while issuing fewer external search queries, highlighting its computational efficiency compared to the more demanding resources typically required by larger LLMs, and establishing it as an efficient solution for generating accurate long-form answers. This result also underscores the effectiveness of the SynAgent-RAG dataset in distilling reasoning capabilities from a larger LLM (GPT-4) into a more compact framework.
This paper presents a novel approach for unified retrieval-augmented generation (RAG) systems using the recently emerging large language model (LLM) agent concept. Specifically, the LLM agent, which employs an LLM as its core controller, has become a promising approach for enabling the interpretability of RAG tasks, especially for complex reasoning question-answering systems (e.g., multi-hop queries). Nonetheless, previous works mainly focus on solving RAG systems with either single-hop or multi-hop approaches separately, which limits their application to real-world settings. In this study, we propose a trainable agent framework called Agent-UniRAG for unified retrieval-augmented LLM systems, which enhances the effectiveness and interpretability of RAG systems. The main idea is to design an LLM agent framework that solves RAG tasks step-by-step based on the complexity of the inputs, handling single-hop and multi-hop queries simultaneously in an end-to-end manner. Furthermore, we introduce SynAgent-RAG, a synthetic dataset that enables the proposed agent framework for small open-source LLMs (e.g., Llama-3-8B). The results show comparable performance with closed-source and larger open-source LLMs across various RAG benchmarks. Our source code and dataset are publicly available for further exploitation.
# 1 Introduction

The rapid development of autonomous driving technology has placed increasingly high demands on the interpretability of decision-making systems. Achieving a "white-box" autonomous driving decision model, in which the internal logic of the decision-making process is transparent and comprehensible to humans, has become a focal research direction in academia [1, 2, 3, 4, 5]. In recent years, with the advancement of deep learning, neural network-based decision systems have emerged as the dominant approach in autonomous driving due to their data-driven nature and ease of learning. However, such systems typically rely heavily on training data, leading to a sharp decline in performance when encountering out-of-distribution driving scenarios. Moreover, these models are often over-parameterized, making their decision logic difficult to interpret and challenging for domain experts to modify or debug when suboptimal performance occurs. Rule-based decision systems, on the other hand, hold significant value in the field of interpretable autonomous driving due to their intrinsic transparency and modifiability. By defining explicit rules to guide vehicle behavior, these models offer strong traceability and explainability. Nevertheless, traditional rule-based systems are highly dependent on expert knowledge, resulting in high development costs and limited adaptability to complex and dynamic traffic environments. Furthermore, their structural characteristics make it difficult to integrate data-driven learning methods, hindering automated improvement as new scenarios emerge and making the construction of high-complexity rule-based models prohibitively expensive. Recently, the emergence of large language models (LLMs) [6, 7, 8, 9, 10] has opened up new possibilities for rule-based autonomous driving decision-making.
Trained on vast corpora, LLMs exhibit extensive world knowledge and strong reasoning capabilities, enabling them to automatically generate logically coherent rule sets without human intervention. This capability offers a novel pathway toward building highly interpretable, rule-based decision models. Although some LLM-based approaches for interpretable autonomous driving decisions have been proposed [2, 3, 4, 5], they remain largely model-centric, which introduces several key limitations. Low response efficiency: Most existing methods generate a single decision from a single-frame input, resulting in slow overall response times that fail to meet the strict real-time requirements of autonomous driving systems. Additionally, this local perspective limits comprehensive scene understanding and unified modeling. Poor tactic flexibility: Even though some methods can produce interpretable decision-making processes, their underlying mechanism remains data-driven fine-tuning, which lacks high-level abstraction of driving scenarios. Consequently, such tactics struggle to generalize to novel or extreme driving situations. Moreover, despite the availability of interpretable outputs, researchers still find it difficult to directly modify the driving policy model to meet diverse requirements. To address these challenges, we propose ADRD (LLM-Driven Autonomous Driving Based on Rule-based Decision Systems), a novel framework that leverages large language models to automatically generate rule-based driving policies, aiming to achieve efficient, interpretable, and robust decision-making in autonomous driving. Decision trees, as a classical rule-based model, naturally offer both interpretability and fast inference speed, making them well-suited for various driving scenarios. Their structured representation also facilitates policy transfer and modification.
Therefore, in this work, we adopt decision trees as the concrete implementation of our rule-based autonomous driving system. Specifically, our framework consists of three core modules, the Planner, the Coder, and the Summarizer, forming a closed-loop agent system. First, the Planner generates high-level strategic descriptions based on the input driving scenario description, predefined rules, and vehicle state. Then, the Coder translates these descriptions into executable decision tree code. Finally, after simulation validation, failure cases are fed into the Summarizer for iterative optimization of the tactic, which is then fed back to the Planner. Notably, we use executable code as the medium of interaction between the decision tree and the driving environment, a design choice whose advantages have been demonstrated in [11, 12, 13]. We evaluate the performance of ADRD on highway-env, a popular autonomous driving simulation platform. Extensive experimental results demonstrate that ADRD exhibits strong generalization and robustness across multiple typical driving scenarios. Compared to traditional knowledge-driven approaches and modern data-driven reinforcement learning methods, ADRD achieves significant improvements in decision-making performance, response efficiency, and interpretability, highlighting its great potential for deployment in real-world autonomous driving systems. This paper is organized as follows: Section 2 reviews related work; Section 3 presents the detailed framework design of ADRD; Section 4 reports experimental results under different driving scenarios and parameter settings, along with comparisons against baseline methods; and Section 5 concludes the paper. Our code is publicly available at https://github.com/bjbcjwsq/ADRD.

# 2 Related Works

Rule-Based Decision Systems: Rule-based decision systems make decisions using explicitly defined rules, which endows them with strong interpretability and modifiability.
Classic rule-based decision algorithms include rule engines [14], expert systems [15], and fuzzy logic [16]. Among these, decision trees [17, 18, 19] represent a canonical approach to rule-based decision modeling. They recursively partition a dataset into smaller subsets to construct a tree-like structure used for decision-making or prediction. Each internal node represents a test on a specific feature, while each leaf node corresponds to a final class label or output value. This hierarchical structure offers high transparency and interpretability. Traditionally, the construction of decision trees has followed two main paradigms: knowledge-driven methods, relying primarily on human experts, and data-driven approaches, such as classical machine learning algorithms like CART [20] and ID3 [21]. However, these conventional methods suffer from several limitations, including heavy dependence on expert experience, susceptibility to expert bias, and sensitivity to training data, all of which pose significant challenges when applying decision trees to modern complex systems. Recent advances in large language models (LLMs), however, have opened up new possibilities for knowledge-driven decision tree construction.
For example, [12] have used large models to generate decision trees for zero-sum games in StarCraft, preliminarily validating the feasibility of this paradigm. Further work [13] combined LLM-generated decision trees with reinforcement learning, enabling co-evolution between RL agents and LLMs through mutual interaction. These emerging studies highlight the promising potential of using LLM-generated decision trees for interpretable decision-making.

Figure 1: Overview of the ADRD pipeline. The Information Module converts the driving scenario, driving rules, and vehicle state into textual descriptions; within the Agents Module, the Planner generates textual driving tactics via CoT prompting, the Coder translates them into an executable decision function (e.g., `def on_step(observation)` returning one of "IDLE", "SLOWER", "FASTER", "LANE_LEFT", "LANE_RIGHT"), and the Summarizer turns collision reports from the Testing Module into improvement advice fed back to both.
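As a minimal illustration of this paradigm (not the construction used by any of the cited works, and with illustrative feature names), a decision tree over abstract driving features can be emitted directly as an executable function, where internal nodes become conditionals and leaves become discrete actions:

```python
def decide(obs):
    """A toy decision tree rendered as code: internal nodes are feature
    tests, leaves are discrete driving actions (all names illustrative)."""
    if obs["front_gap"] < obs["safe_gap"]:      # root test: preceding vehicle too close?
        if obs["left_lane_free"]:               # internal node: escape route available?
            return "LANE_LEFT"
        return "SLOWER"
    if obs["speed"] < obs["target_speed"]:      # otherwise, close the speed deficit
        return "FASTER"
    return "IDLE"
```

Because the tree is ordinary source code, a human expert can read, audit, and patch any branch directly, which is precisely the interpretability argument made above.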
Code Generation for Decision-Making Using Large Language Models: In tasks involving the construction of decision trees using large models, the generated decision trees are typically returned as Python code blocks[12, 13]. This implies a significant demand on the coding and reasoning capabilities of large language models. Recently, large language models have made remarkable progress in code generation. For instance, Codex[22] based on GPT-3[8] and PyCodeGPT[23] based on GPT-Neo[24], have been trained on large-scale, high-quality code datasets crawled from the web, enabling them to outperform general-purpose language models on specialized code generation tasks. Additionally, several recent studies have explored the advantages of using executable code as a unified action space for LLM agents. For instance, the CodeAct[11] framework proposes using executable Python code as the primary form of decision-making and demonstrates its superiority over traditional formats such as JSON or plain text. This successful case suggests that having large models generate executable code as part of the action space offers a promising direction for decision-making in complex systems like autonomous vehicles. Interpretable Autonomous Driving Decisions: Interpretable autonomous driving decisions aim to break open the black box of decision models so that humans can understand the underlying logic of decisions. This is crucial for improving transparency, trustworthiness, and the ethical and legal analysis of autonomous systems. Before the advent of large language models, interpretable autonomous driving decisions were typically achieved through methods such as decision tree construction or imitation learning[25]. While these approaches allowed models to provide rudimentary explanations for their decisions, their explanatory power was limited by weak semantic understanding. With the emergence of large language models, detailed comprehension of driving decisions has become possible. 
For example, Dilu[3] designed a closed-loop decision agent composed of memory, reasoning, and reflection modules by modeling the thinking process of human drivers, achieving efficient scene understanding and driving decisions without model fine-tuning. Other works, such as DriveGPT4[5], adopted multimodal fine-tuning to allow language models to simultaneously generate both driving decisions and corresponding textual explanations. However, these works still focus primarily on individual scenarios without considering the overarching logical principles of driving. This narrow perspective limits both their response efficiency and their ability to understand macro-level features across similar driving scenarios. # 3 Methodology In this section, we present a comprehensive overview of the ADRD framework and detailed implementation of each submodule within the Agents Module. The overall architecture of ADRD, as shown in Figure 1, consists of three primary modules: Information Module, Agents Module, and Testing Module. These modules are responsible for constructing an initial driving information set, analyzing and reasoning about driving decision logic, and conducting testing and feedback, respectively. Among them, the Agents Module is the core of ADRD and is further divided into Planner, Coder, and Summarizer, which are responsible for generating preliminary driving tactics, converting these tactics into executable Python code, and improving driving tactics based on simulation failure results, respectively. Through a well-designed prompt engineering framework, ADRD can continuously refine its decision tree while maintaining safe driving and good interpretability. In the following, we employ decision trees as the concrete implementation of the rule-based decision system. # 3.1 Overview We first introduce the training process and pipeline of ADRD. 
For the training process of ADRD, at the beginning, the Information Module converts scenario information, vehicle information, and driving rules into natural language text understandable by LLMs. Driving environment and vehicle information come from the vehicle's environmental observations; in this paper, we assume them to be highly abstracted environmental features such as the number and width of lanes, the position and speed of the ego-vehicle, etc. Driving rules come from predefined traffic regulations, establishing a basic understanding of driving ethics for LLMs. A specific output example of the Information Module is shown in Figure 2, enabling the LLM to initially grasp a macroscopic understanding of the driving scenario, laying the foundation for comprehension, reasoning, and decision tree construction in the Agents Module.

Figure 2: A specific output example of the Information Module. The scenario information gives the lane count and lane width; the ego-vehicle status gives its lane, speed (vx, vy), heading, and the speed limit range; the observed-vehicle status lists vehicles on the same, left, and right lanes with their distances, relative speeds, and lateral positions; the driving rules state predefined traffic regulations (e.g., not exceeding posted speed limits).

The refined information from the Information Module is then fed into the Agents Module. This module is the core component of ADRD, where the Planner, Coder, and Summarizer collaborate effectively. They not only ensure robust forward reasoning in the decision tree but also improve it based on failed interactions, achieving effective closed-loop iteration. Specifically, first, the Planner generates preliminary driving tactics based on textual descriptions of the driving scenario and predefined driving targets (e.g., conservative and safe driving or efficient and fast driving). Second, the Coder translates the detailed driving tactic description from the Planner into executable Python code suitable for a specific simulation environment. This code is deployed into real or simulated driving environments for validation. Finally, the Summarizer evaluates the tactics generated by the Planner and the code produced by the Coder based on collision reports from the Testing Module, identifies potential issues, and provides detailed improvement suggestions for both modules. By iterating through this closed-loop control process repeatedly, the Agents Module continually enhances its understanding and control capabilities in driving scenarios, resulting in increasingly better-performing decision trees.
For the pipeline of ADRD, ADRD takes the driving information extracted by the Information Module from the driving environment and feeds it into the well-trained rule-based decision system to generate specific driving decisions. These decisions are then executed in the driving environment to interact with the current scene and obtain the next frame of the driving scenario. This process continues in a loop until the preset maximum driving time is reached or a hazardous driving event occurs. Next, we will detail how each of the three expert sub-modules in the Agents Module operates. # 3.2 Planner During the generation of the driving behavior decision tree, the Planner primarily assumes the roles of cognition, reasoning, and planning. Compared to directly generating executable decision tree Python code from environmental information, the Planner alleviates some cognitive and reasoning pressure by first generating human-readable textual driving tactics before producing executable code. This not only improves the explainability of the decision tree formation process but also contributes to enhancing its performance. The prompt generator for the Planner consists of three components: system prompts, driving targets, and past plans along with advice for improvement. The system prompts describe the Planner’s role within the agent group and specify a standardized output format. The driving targets define the desired driving style, such as conservative or aggressive, for which the Planner should construct the decision tree. Historical planning records and improvement suggestions are optional inputs that, when provided, serve as references to help the Planner generate more refined and robust driving tactics. By synthesizing all this information, the Planner analyzes the current driving scenario and proposes a set of driving tactics most relevant to the situation. 
For each tactic, the Planner defines its name, usage conditions, priority, and detailed execution, facilitating the Coder’s understanding of how to integrate these tactics into a coherent and unified decision tree. Take the tactic named "Active Lane Change Operation" as an example. The execution process of this tactic is: 1. Evaluate the distance and closing speed of the preceding vehicle. When the distance is significantly less than the safe distance and the rate of closure is rapid (usage condition): 1.a First, check whether there is sufficient space in the adjacent lane to complete the lane change operation. If the ego-vehicle is not in the leftmost lane, prioritize changing to the left lane; otherwise, consider the right lane. 1.b Second, assess the safety of the lane change to the adjacent lane, considering vehicle speeds and other potential hazards. 1.c If the target lane meets safety conditions, execute the lane change immediately. 2. If the lane change conditions are not met, fall back to deceleration operations (refer to the tactic "Conservative Deceleration") (priority). Generating human-readable, interpretable tactics offers several key advantages. First, it enables human experts to understand the intermediate process by which large language models (LLMs) generate decision trees. More importantly, it allows for direct modification of the tactic content, facilitating alignment with real-world driving requirements. Second, this approach can fully leverage the pre-trained knowledge and common-sense reasoning capabilities of LLMs, ensuring that the generated driving policies are grounded in human-like understanding and behavior. Figure 3 illustrates a comparison between the driving tactics generated by a reinforcement learning (RL) algorithm and those produced by an LLM in a particular high-speed driving scenario. 
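The tactic steps above might be rendered by the Coder roughly as follows; the observation fields (`front_gap`, `closing_speed`, `left_lane_free`, etc.) and the threshold are illustrative assumptions, not the paper's actual observation schema:

```python
def active_lane_change(obs, safe_gap=25.0):
    """Sketch of the 'Active Lane Change Operation' tactic: usage condition,
    left-lane preference, safety check, and deceleration fallback."""
    # Usage condition: preceding vehicle well inside the safe gap and closing
    if obs["front_gap"] < safe_gap and obs["closing_speed"] > 0:
        # Prefer the left lane unless already leftmost, then consider the right
        candidates = (
            ("LANE_LEFT", obs["left_lane_free"] and not obs["in_leftmost_lane"]),
            ("LANE_RIGHT", obs["right_lane_free"]),
        )
        for action, safe in candidates:
            if safe:                 # adjacent lane has space and passes the safety check
                return action
        return "SLOWER"              # fallback: 'Conservative Deceleration' tactic
    return None                      # usage condition not met; tactic not applicable
```

Returning `None` when the usage condition fails lets a dispatcher fall through to the next tactic by priority, mirroring how the Planner orders tactics.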
Although both tactics achieve relatively high safety rewards, the RL-generated policy chooses to switch to the more hazardous fourth lane, abandoning the relatively safer third lane, which is inconsistent with normal human driving behavior. Moreover, due to the black-box nature of neural network-based policies, it is difficult for human experts to directly adjust the learned parameters to correct such undesirable behaviors. In contrast, the LLM-generated tactic not only aligns naturally with human driving conventions, choosing to follow traffic more safely in the appropriate lane, but also remains easily interpretable and modifiable by human experts. These results demonstrate that ADRD effectively overcomes the limitations of traditional decision-making methods, particularly their lack of transparency and difficulty in manual correction. Detailed examples of prompts are available in Appendix A.1.

Figure 3: Policy of PPO vs Policy of ADRD.

# 3.3 Coder

The main responsibility of the Coder is to generate executable Python functions for the simulation environment based on textual driving tactics generated by the Planner. The Coder relies on the strong code-writing capabilities of current LLMs, as numerous studies have demonstrated the promising applications of LLM-generated code as an action space. The Coder's prompt generator also comprises three components: system prompts, textual driving tactic descriptions, and past code along with improvement suggestions. System prompts define the Coder's role and standardized output format within the agent group, including detailed descriptions of variable meanings and formats in the observation space to ensure the generated code can be decoded programmatically. The textual driving tactic descriptions come from the Planner's output, while past code and improvement suggestions originate from the Summarizer, similar to the Planner.
Based on this information, the Coder generates executable Python code, in particular a Python function taking observations as input, which is fed into the simulation environment for interaction. This output format facilitates the integration of textual driving tactics into the simulation environment for controlling the ego-vehicle, greatly simplifying the self-iteration process of ADRD. Detailed prompt examples are provided in Appendix A.2. Figure 4 further visualizes a converged driving tactic function as a decision tree structure, making it easy for human experts to read, analyze, and identify potential logical vulnerabilities. This significantly enhances the reliability and transparency of intermediate steps in autonomous driving decisions. Notably, by incorporating different constraints into the prompts of the Planner and Coder, such as "driving style descriptions", ADRD can generate decision trees of varying complexity, a topic discussed in detail in the "Experiments" section.

# 3.4 Summarizer

Although the Planner and Coder have completed the initial construction of the autonomous driving decision tree, to ensure the decision tree is reliably usable in driving environments, appropriate regulatory and feedback mechanisms must be introduced to safeguard driving efficiency and safety; this is the primary function of the Summarizer. The Summarizer's prompt generator includes three components: system prompts, collision reports, and corresponding decision tree code implementations. System prompts describe the responsibilities of the Summarizer, namely identifying whether errors reside in the driving tactic itself generated by the Planner or in the Python code implementation by the Coder.
Collision reports are formatted by the Testing Module and include historical trajectories of the ego-vehicle and surrounding vehicles over the past T time steps, serving as reference data for the LLM to infer collision causes and map them to problems in driving tactics and executable codes. The driving tactics and corresponding decision tree codes were generated by the Planner and Coder in the previous iteration step, assisting the LLM in locating issues and generating recommendations for improvement in the respective modules. Designing the Summarizer as an independent unit rather than integrating it into the Planner and Coder modules offers several advantages: first, it avoids requiring both modules to reflect after every collision, saving training time costs and computational resources for LLM inference; second, it promotes specialization among different modules, effectively reducing the complexity faced by the LLM in handling individual tasks, lowering inference load, and improving LLM performance on both single and overall tasks. Among the three sub-modules in the Agents Module, the Summarizer undertakes the majority of cognitive and reasoning tasks, serving as the core component enabling ADRD’s self-iterative evolution. Detailed prompt examples are available in Appendix A.3. Figure 4: A converged decision tree generated by ADRD. To concisely illustrate the key structure of the decision tree, some specific numerical values have been omitted. For a more detailed version of the decision trees, please refer to the Experiment section. # 4 Experiments # 4.1 Settings and Baselines We conduct experiments using the highway-v0 scenario from the highway-env environment. This scenario simulates an autonomous driving environment on a highway, which we use to compare the performance of ADRD with several baseline autonomous driving decision-making methods. 
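The scenario difficulty settings evaluated in the experiments can be summarized as a small configuration table. The key names `lanes_count` and `vehicles_density` below follow highway-env's configuration conventions, though the authors' exact configuration dict is not shown in the paper:

```python
# (lane count, vehicle density) for the three difficulty levels of the experiments
SCENARIOS = {
    "normal":  {"lanes_count": 4, "vehicles_density": 2.00},
    "hard":    {"lanes_count": 5, "vehicles_density": 2.50},
    "extreme": {"lanes_count": 6, "vehicles_density": 3.00},
}
```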
The baselines include PPO-CLIP[26], a representative algorithm in the field of reinforcement learning (RL), and DiLu[3], a knowledge-driven decision-making method based on large language models. Among all the methods evaluated on highway-env, DiLu demonstrates the best decision-making performance, to the best of our knowledge. In all autonomous driving policies, we adopt the discrete action space provided by highway-v0 as the set of possible driving decisions for the ego vehicle. These actions include:

• Maintain current speed and lane (IDLE),
• Decelerate (SLOWER),
• Accelerate (FASTER),
• Change lane to the left (LANE_LEFT),
• Change lane to the right (LANE_RIGHT).

The decision-making frequency is set to 1 Hz. For speed control, there are five speed levels: $20\,\mathrm{m/s}$, $25\,\mathrm{m/s}$, $30\,\mathrm{m/s}$, $35\,\mathrm{m/s}$, and $40\,\mathrm{m/s}$. Executing acceleration or deceleration actions allows the ego vehicle to shift between these speed levels. If the vehicle is already at the lowest or highest speed level, further deceleration or acceleration has no effect on its speed.

# PPO-CLIP

We employ PPO-CLIP as a representative RL baseline. PPO-CLIP is a widely used policy gradient algorithm in deep reinforcement learning. Its core idea is to introduce a "trust region" during policy updates, limiting the divergence between the new and old policies to prevent training instability. This method offers advantages such as parameter insensitivity, ease of tuning, and high sample efficiency. The exact form of its objective function is detailed in the original paper [26], and is omitted here for brevity. In our experimental setup, the observation space includes state information from the ego vehicle and up to 7 of the nearest surrounding vehicles, including their positions, velocities, and headings. The policy network consists of a two-layer multi-layer perceptron (MLP), with each layer containing 128 neurons.
The learning rate is set to $3 \times 10^{-4}$, and the model is trained for a total of 90,000 action steps. This configuration enables PPO-CLIP to serve effectively as a representative of traditional RL-based autonomous driving decision methods, allowing us to compare its performance with that of ADRD and DiLu. Both DiLu and ADRD are trained using OpenAI's o3-mini[27] model, which is specifically designed to offer fast and cost-effective reasoning capabilities, excelling particularly in tasks related to programming, mathematics, and science.

# DiLu

We select DiLu as a representative autonomous driving decision method leveraging large language models. DiLu simulates the learning process of human drivers through three core modules:

• Memory Module: Stores historical driving experiences,
• Reasoning Module: Combines the current driving context with a subset of relevant past experiences to generate decisions,
• Reflection Module: Evaluates the safety of decisions and updates the memory bank to improve future behavior.

In our experiments, we configure DiLu to use the 3 most relevant memory entries for each decision-making step. That is, at every time step, DiLu refers to the 3 historical experiences most similar to the current driving situation to assist in reasoning and decision-making.

# 4.2 Main Results

For the three methods mentioned in Section 4.1, we evaluate their performance across three driving scenarios with varying levels of difficulty: specifically, a 4-lane environment with a vehicle density of 2.00, a 5-lane environment with a density of 2.50, and a 6-lane environment with a density of 3.00, representing normal, hard, and extreme driving conditions, respectively. The density parameter is provided internally by highway-env and reflects the number of vehicles per unit length of lane.
For each method, we measure two key metrics: the average safe driving time (seconds) over 20 different randomized scenarios, and the average inference time per decision (seconds per command). These metrics capture the driving performance and decision-making efficiency of the different autonomous driving approaches, respectively. It should be noted that both PPO and ADRD perform inference on two Intel(R) Xeon(R) Platinum 8374B CPUs. For PPO, the inference time corresponds to the forward pass of its policy network, while for ADRD, it depends on the complexity of the executable Python functions. DiLu, on the other hand, requires calling OpenAI's o3-mini model for each decision, and its inference time mainly comes from the response speed of the LLM. The results of the three methods are presented in Table 1. It can be seen that, across all three driving scenarios, ADRD not only achieves the longest average safe driving times among all methods but also demonstrates the fastest inference speed. Notably, PPO performs the worst in terms of driving performance despite being trained for 90,000 driving frames, far more than the other two methods, highlighting the limitations of reinforcement learning-based data-driven approaches in autonomous driving decision-making.

Table 1: Comparison of Average Driving Time and Control Efficiency under Different Scenario Settings

# 4.3 Impact of Different Driving Styles and Driving Scenario Difficulty on Decision Tree Structure

We further investigate how driving styles and scenario difficulty influence the structure of decision trees. Figure 6 compares the decision trees generated under prompt settings for conservative and aggressive driving styles, using a fixed environment configuration of 4 lanes with a vehicle density of 2.00.
It can be observed that the decision tree corresponding to the conservative driving style is relatively shallow and structurally simple, whereas the one for the aggressive style exhibits greater depth and structural complexity. This is because, under the aggressive tactic, the large language model actively seeks ways to complete the driving task as efficiently and quickly as possible, which increases the likelihood of accidents. As a result, the model engages in more nuanced reasoning about the driving environment, generating a richer set of conditional nodes to handle extreme or complex driving situations. Figure 5: Decision Trees Obtained from Conservative Policies Furthermore, we analyze how varying levels of driving scenario difficulty affect the structure of the decision trees. To amplify structural differences across scenarios, we use the aggressive driving tactic and conduct experiments under configurations of 4 lanes with vehicle densities of 0.75, 1.00, and 1.25. A relatively lower vehicle density is selected to ensure that the learned policies remain largely collision-free during training. The structural parameters of the resulting decision trees are summarized in Table 2. We observe that as the difficulty of the driving scenario increases, the decision trees produced by ADRD become progressively more complex. Notably, when the vehicle density reaches 1.25, the depth of the decision tree sharply increases to 34. This indicates that as driving conditions become more challenging, the large language model must make finer distinctions among different traffic situations in order to generate decisions that are both safe and consistent with the aggressive driving target. Figure 6: Decision Trees Obtained from Aggressive Policies Table 2: Summary of decision tree characteristics under different vehicle densities.
# 5 Conclusion

How to construct an interpretable autonomous driving decision-making system has become a focal point in academic research. In this study, we propose a novel approach that leverages large language models (LLMs) to generate executable, rule-based decision systems to address this challenge. Specifically, harnessing the strong reasoning and programming capabilities of LLMs, we introduce the ADRD (LLM-Driven Autonomous Driving Based on Rule-based Decision Systems) framework, which integrates three core modules: the Information Module, the Agents Module, and the Testing Module. The framework operates by first aggregating contextual driving scenario information through the Information Module, then utilizing the Agents Module to generate rule-based driving tactics. These tactics are iteratively refined through continuous interaction with the Testing Module. Extensive experimental evaluations demonstrate that ADRD exhibits superior performance in autonomous driving decision tasks. Compared to traditional reinforcement learning approaches and the most advanced LLM-based methods, ADRD shows significant advantages in terms of interpretability, response speed, and driving performance. These results highlight the framework's ability to achieve comprehensive and accurate understanding of complex driving scenarios, and underscore the promising future of transparent, rule-based decision systems that are easily modifiable and broadly applicable. To the best of our knowledge, this is the first work that integrates large language models with rule-based systems for autonomous driving decision-making, and our findings validate its potential for real-world deployment.
[ "cs.AI" ]
# Introduction

Using a more compact CNF encoding can make the entire difference between a combinatorial problem being solvable (even in many CPU years) and it being intractable (Subercaseaux and Heule, 2023; Heule and Scheucher, 2024; Wesley, 2024; Heule and Szeider, 2015; Schidler and Szeider, 2024; Qian et al., 2025). However, besides a few very general principles (Björk, 2011; Prestwich, 2021), it seems that the "art of encodings" is still mostly explored through problem-specific ideas, and it is not clear how to systematically obtain smaller encodings for combinatorial problems. Furthermore, lower bounds on the size of encodings have been elusive, with very few exceptions on relatively simple constraints such as Parity (Emdin et al., 2022) or At-Most-One (Kučera et al., 2019), making it difficult to predict whether the direct encoding for a given problem is already optimal or not. In this article, I will show that several standard graph problems can be encoded more efficiently than through their direct formulation, and more importantly, that the tools used can shed light on theoretical questions about encodings. As a representative example, consider first the independent set problem. The input is a graph $G = (V, E)$, together with an integer $k$, and the goal is to find a subset of the vertices $S \subseteq V$ such that $\binom{S}{2} \cap E = \emptyset$ (i.e., no two vertices in $S$ are neighbors) and $|S| = k$. The "direct encoding" is thus to create, for each vertex $\nu \in V$, a variable $x_\nu$ representing whether $\nu \in S$. Then, the direct encoding consists of enforcing the independent-set property
$$ \bigwedge_{\{u, \nu\} \in E} (\overline{x_u} \vee \overline{x_\nu}), $$
and then the cardinality constraint $\sum_{\nu \in V} x_\nu = k$.
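As a concrete sketch, the edge clauses of Equation (1) (the cardinality part aside) can be generated in a few lines, mapping vertex $\nu \in \{0, \ldots, |V|-1\}$ to the DIMACS-style positive variable $\nu + 1$:

```python
def direct_isp(edges):
    """Direct encoding of the independent-set property: one binary clause
    (-x_u v -x_v) per edge; vertex v maps to DIMACS variable v + 1."""
    return [(-(u + 1), -(v + 1)) for u, v in edges]

# Triangle K3: three edges, hence three binary clauses
triangle_clauses = direct_isp([(0, 1), (0, 2), (1, 2)])
```

The clause count is exactly $|E|$, which is the $\Theta(|E|)$ behavior discussed next.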
While cardinality constraints are known to admit compact encodings with $O(n)$ clauses (Sinz, 2005), and even arc-consistency with $O(n \lg^2 n)$ clauses (Asín et al., 2011), Equation (1) amounts to $\Theta(|E|)$ clauses, which is $\Omega(|V|^2)$ for dense graphs. Our first contribution (in Section 2) is to show that this encoding can be improved to $O(|V|^2 / \lg |V|)$ clauses, and in consequence, that several structurally similar graph problems can be encoded compactly as well:

Theorem 1 (Informal). The independent set, vertex cover, $k$-coloring, and clique problems can be encoded into CNF using $O(|V|^2 / \lg |V|)$ many clauses.

This result improves upon an idea of Rintanen (2006), and then Ignatiev et al. (2017), who used clique coverings to encode the independent-set property by observing that, for any clique $K_t$ of $G$, at most one vertex of the clique can be part of $S$, which can be encoded using $O(t)$ clauses as opposed to the $\Omega(t^2)$ clauses used by Equation (1). However, as the authors themselves note, this idea is not enough to obtain an encoding with $o(|V|^2)$ clauses in all graphs, since for example a complete bipartite graph has $\Omega(|V|^2)$ edges and yet no cliques of size larger than 2. We overcome this limitation by using biclique coverings of graphs, leveraging the fact that any graph with $\Omega(|V|^2)$ edges must contain a biclique $K_{t,t}$ with $t = \Omega(\lg |V|)$ (Chung et al., 1983). Then, in Section 3, we compare in more detail the biclique covering framework with the clique covering framework of Ignatiev et al. (2017) as well as with Bounded Variable Addition (BVA) (Manthey et al., 2012), a successful preprocessing technique for reducing encoding sizes.
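To give intuition for why bicliques help (this is a standard auxiliary-variable gadget; the paper's exact construction may differ), the $|A| \cdot |B|$ conflict clauses of a biclique between vertex sets $A$ and $B$ can be replaced by $|A| + |B|$ clauses using one fresh variable $z$: clauses $\overline{x_a} \vee z$ for $a \in A$ and $\overline{z} \vee \overline{x_b}$ for $b \in B$, so selecting any $a$ forces $z$, and $z$ forbids every $b$:

```python
def biclique_gadget(A, B, z):
    """Encode (-x_a v -x_b) for all a in A, b in B with |A| + |B| clauses.
    A and B are lists of DIMACS variables; z is a fresh auxiliary variable."""
    return [(-a, z) for a in A] + [(-z, -b) for b in B]
```

Over a biclique $K_{t,t}$ this trades $t^2$ clauses for $2t$ clauses plus one variable, which is the saving the covering framework amplifies across the whole graph.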
As a cornerstone of this comparison, we study in Section 4 how to encode that a selection of intervals $[i, j]$ for $1 \leqslant i < j \leqslant n$ is pairwise disjoint, i.e., that no two intervals overlap. This corresponds to encoding the independent-set property of a corresponding interval graph $\mathcal{I}_n$. We show that, despite the fact that $|E(\mathcal{I}_n)| = \Omega(n^4)$, it is possible to obtain a much more compact encoding.

Theorem 2 (Informal). The independent-set property of the interval graph $\mathcal{I}_n$ can be encoded into CNF using $O(n^2 \lg n)$ clauses.

We show that this is more efficient than what is obtained via either the clique covering framework or the biclique covering framework. Moreover, we show that while BVA can obtain a more compact encoding in terms of the number of clauses (experimentally, since we do not have a theoretical guarantee), the structured encoding we present works better in practice. This is reminiscent of the idea of Haberlandt et al. (2023) for Structured Bounded Variable Addition (SBVA), which ended up winning the SAT competition 2023.

# 1.1 Notation and preliminaries

We will use $\top$ and $\bot$ to denote true and false, respectively. The negation of a variable $x$ is denoted $\overline{x}$, and a literal is either a variable or the negation of a variable. Literals $x$ and $\overline{x}$ are said to be complementary, for any variable $x$. A clause is a set of non-complementary literals, and a formula is a set of clauses (we will thus identify $\wedge$ with $\cup$ when operating over clauses, and $\ell_1 \vee \ell_2$ with $\{\ell_1\} \cup \{\ell_2\}$ when operating over literals). The size of a formula is simply its number of clauses. We denote the set of variables appearing in a formula $F$ as $\mathsf{Var}(F)$.
Given a set $\mathcal{V}$ of variables, an assignment is a function $\tau : \mathcal{V} \to \{\bot, \top\}$. For a variable $x \in \mathcal{V}$, we say $\tau \models x$ if $\tau(x) = \top$, and similarly, $\tau \models \overline{x}$ if $\tau(x) = \bot$. For a clause $C$, we say $\tau \models C$ if $\tau \models \ell$ for some $\ell \in C$, and for a formula $F$, $\tau \models F$ if $\tau \models C$ for every $C \in F$. When writing this, we assume implicitly that $\mathsf{Var}(F) \subseteq \mathcal{V}$. For an assignment $\tau : \mathcal{V} \to \{\bot, \top\}$ and a formula $F$, now with $\mathcal{V} \subsetneq \mathsf{Var}(F)$, we denote by $F|_\tau$ the formula obtained by eliminating from $F$ each clause satisfied by $\tau$, and then from each remaining clause eliminating every literal $\ell$ such that $\tau \models \overline{\ell}$. Note that $\mathsf{Var}(F|_\tau) = \mathsf{Var}(F) \setminus \mathcal{V}$. We will write $\mathsf{SAT}(F)$ to say that $\tau \models F$ for some assignment $\tau$, and $\mathsf{UNSAT}(F)$ to mean that no such assignment exists.

# 2 Subquadratic Encodings through Biclique Coverings

# 2.1 Clique Covering Encodings

Arguably, the At-Most-One constraint (AMO) for variables $x_1, \ldots, x_n$ is the most elemental example of an encoding whose direct formulation can be asymptotically improved. The naïve formulation, often referred to as the pairwise encoding, is simply

$$ \mathsf{AMO}(x_1, \ldots, x_n) := \bigwedge_{1 \leqslant i < j \leqslant n} (\overline{x_i} \vee \overline{x_j}), $$

using $\binom{n}{2}$ clauses, analogously to the dense case of Equation (1).
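The pairwise formulation above is mechanical to generate. The following Python sketch (an illustration only, using the DIMACS convention where a positive integer denotes a variable and a negative integer its negation) emits exactly the clauses of $\mathsf{AMO}(x_1, \ldots, x_n)$:

```python
from itertools import combinations

def amo_pairwise(xs):
    """Pairwise At-Most-One encoding: one binary clause per pair,
    in DIMACS style (positive int = variable, negative = negation)."""
    return [[-a, -b] for a, b in combinations(xs, 2)]
```

For $n$ variables this yields exactly $\binom{n}{2}$ binary clauses, matching the count in the text.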
On the other hand, several formulations using $O(n)$ clauses are known (Nguyen et al., 2021; Zhou, 2020). The most compact is Chen's product encoding, which uses $2n + 4\sqrt{n} + O(\sqrt[4]{n})$ clauses (Chen, 2010), and is essentially tight (Kučera et al., 2019). We will use notation $\mathsf{AMO}_{\mathsf{PE}}(x_1, \ldots, x_n)$ to denote the formula resulting from Chen's product encoding over variables $x_1, \ldots, x_n$. As noted by Ignatiev et al. (2017), we can interpret this result as saying that "the independent-set property can be encoded in $O(n)$ clauses for a complete graph $K_n$". Let us now formalize what this means.

Definition 3 (Encoding the ISP). Given a graph $G = (V, E)$, a set of variables $X = \{x_\nu \mid \nu \in V\}$, and a potentially empty set of (auxiliary) variables $Y = \{y_1, \ldots, y_m\}$, we say that a formula $F$ with $\mathsf{Var}(F) = X \cup Y$ encodes the "independent-set property" (ISP) if for every assignment $\tau : X \to \{\bot, \top\}$, the formula $F|_\tau$ is satisfiable if and only if the set $S_\tau := \{\nu \in V \mid \tau(x_\nu) = \top\}$ is an independent set of $G$.

The trivial observation now is that for a complete graph $K_n$, a subset $S \subseteq V(K_n)$ of its vertices is independent if and only if $|S| \leqslant 1$, and thus $\mathsf{AMO}_{\mathsf{PE}}(V(K_n))$ encodes the ISP of $K_n$ with $O(n)$ clauses. Note that we have written $\mathsf{AMO}_{\mathsf{PE}}(V(K_n))$, identifying each vertex $\nu$ with a corresponding variable $x_\nu$, and will do this repeatedly to avoid cluttering the notation. To extend this idea to more general graphs, Ignatiev et al. (2017) used clique coverings: a "clique covering" of a graph $G$ is a set $\{C_1, \ldots, C_m\}$ of subgraphs of $G$, each of which must be a clique, such that every edge $e \in E(G)$ belongs to some subgraph $C_i$. That is, $\bigcup_{i=1}^m E(C_i) = E(G)$.
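Chen's product encoding mentioned above can be sketched as follows. This is a simplified, non-recursive variant (my illustration, not the paper's definition of $\mathsf{AMO}_{\mathsf{PE}}$): the variables are laid out on a $p \times q$ grid with row and column selector variables, and here the selectors themselves use the pairwise encoding, so the result is still $O(n)$ clauses but falls short of the exact $2n + 4\sqrt{n} + O(\sqrt[4]{n})$ bound, which requires recursing on the selectors.

```python
from itertools import combinations
from math import ceil, isqrt

def amo_product(xs, next_aux):
    """Simplified product-style AMO: if two distinct variables were
    both true, they would occupy different grid cells, forcing two
    row selectors or two column selectors to be simultaneously true.
    `next_aux` is the first unused DIMACS variable number."""
    n = len(xs)
    p = isqrt(n - 1) + 1 if n > 1 else 1   # number of rows
    q = ceil(n / p)                        # number of columns
    rows = [next_aux + i for i in range(p)]
    cols = [next_aux + p + j for j in range(q)]
    clauses = []
    for k, x in enumerate(xs):
        i, j = divmod(k, q)                # grid cell of variable x
        clauses.append([-x, rows[i]])      # x -> its row selector
        clauses.append([-x, cols[j]])      # x -> its column selector
    # at most one row / one column selector (pairwise here; AMO_PE recurses)
    clauses += [[-a, -b] for a, b in combinations(rows, 2)]
    clauses += [[-a, -b] for a, b in combinations(cols, 2)]
    return clauses
```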
We thus define a clique-covering-ISP (CC-ISP) encoding as follows:

Definition 4 (CC-ISP encoding). Given a graph $G$, and a clique covering $\mathcal{C}$ of $G$, the formula

$$ F_{\mathcal{C}} := \bigwedge_{C \in \mathcal{C}} \mathsf{AMO}_{\mathsf{PE}}(V(C)) $$

is said to be a "CC-ISP encoding" for $G$.

Note that the number of clauses of $F_{\mathcal{C}}$ is $|F_{\mathcal{C}}| = \sum_{C \in \mathcal{C}} f(|V(C)|)$, where $f(n)$ is the number of clauses used by $\mathsf{AMO}_{\mathsf{PE}}$ over $n$ variables, and thus $f(n) = O(n)$.

Lemma 5 (Implicit in Ignatiev et al. (2017)). Any CC-ISP encoding $F_{\mathcal{C}}$ for a graph $G$ indeed encodes the independent-set property (see Definition 3) for $G$.

Proof. Let $\tau$ be an assignment of the variables $\{x_\nu \mid \nu \in V(G)\}$, and $S_\tau = \{\nu \in V(G) \mid \tau(x_\nu) = \top\}$. Now, assume $S_\tau$ is an independent set of $G$. As the intersection of any independent set and a clique has at most one vertex, we have $|S_\tau \cap V(C)| \leqslant 1$ for every $C \in \mathcal{C}$, and thus $F_{\mathcal{C}}|_\tau$ is clearly satisfiable. Conversely, if $S_\tau$ is not an independent set of $G$, then some edge $e = \{u, \nu\}$ of $G$ has $|S_\tau \cap e| = 2$, but by definition of clique covering, $e \in E(C)$ for some $C \in \mathcal{C}$, and thus $e \subseteq V(C)$. But this implies $|S_\tau \cap V(C)| \geqslant 2$, and thus the subformula $\mathsf{AMO}_{\mathsf{PE}}(V(C))|_\tau$ is already unsatisfiable, implying $F_{\mathcal{C}}|_\tau$ is unsatisfiable. □

Unfortunately, this re-encoding technique is not useful for worst-case graphs.
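Lemma 5 can be checked by brute force on a small example. The sketch below (illustrative; it substitutes the pairwise AMO encoding for $\mathsf{AMO}_{\mathsf{PE}}$, so no auxiliary variables appear and satisfiability of $F_{\mathcal{C}}|_\tau$ reduces to evaluating clauses) builds a CC-ISP encoding for a triangle with a pendant edge:

```python
from itertools import combinations

def cc_isp(cliques):
    """CC-ISP encoding: one AMO block per clique of the covering.
    Pairwise AMO stands in for AMO_PE, giving up the O(n) size of
    Definition 4 but keeping the same satisfying assignments."""
    clauses = []
    for clique in cliques:
        clauses += [[-u, -v] for u, v in combinations(sorted(clique), 2)]
    return clauses

# Triangle {1,2,3} plus pendant edge {3,4}; covering = both cliques.
edges = [(1, 2), (1, 3), (2, 3), (3, 4)]
F = cc_isp([[1, 2, 3], [3, 4]])

def is_independent(S):
    return not any(u in S and v in S for u, v in edges)

def satisfied(S):  # no auxiliaries: evaluate F directly under S
    return all(any((l > 0) == (abs(l) in S) for l in c) for c in F)
```

Enumerating all $2^4$ vertex subsets confirms that $F$ is satisfied exactly by the independent sets, as Lemma 5 asserts.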
For example, in a complete bipartite graph (or biclique) $K_{n,n}$, the maximum clique has size 2 (otherwise there would be a triangle, contradicting bipartiteness), and thus any clique covering $\mathcal{C}$ of $K_{n,n}$ covers at most one edge per clique, leading to $|\mathcal{C}| \geqslant |E(K_{n,n})| = n^2$, and thus $|F_{\mathcal{C}}| \geqslant n^2$. Recall that $|E(K_{n,n})| = n^2$ is the number of clauses used by the pairwise encoding (Equation (1)), and thus no improvement is achieved. It is worth noting that clique coverings do yield an improvement for random graphs $G(n, p)$, using a result by Frieze and Reed (1995) (errors corrected in its arXiv version (Frieze and Reed, 2011)).

Proposition 6. Let $G \sim G(n, p)$ be a random graph on $n$ vertices obtained by adding each edge independently with probability $p$. Then, with high probability there exists a clique covering $\mathcal{C}$ for $G$ such that $|F_{\mathcal{C}}| = O\left(\frac{n^2}{\lg n}\right)$.

Proof Sketch. The covering $\mathcal{C}$ of Frieze and Reed (2011) is constructed iteratively, where the $i$-th iteration uses roughly $p n^2 i^2 e^{1-i} (\ln n)^{-2}$ cliques of size at most $\ln n / i$, for $1 \leqslant i \leqslant \lceil 4 \ln \ln n \rceil$. After that, potentially $O(n^2 (\lg n)^{-4})$ cliques of size 2 are added to deal with any uncovered edges, which we can safely ignore since this contributes only $O(n^2 (\lg n)^{-4}) = O(n^2/\lg n)$ clauses. Thus, we have that

$$ |F_{\mathcal{C}}| = O\left( \sum_{C \in \mathcal{C}} |V(C)| \right) = O\left( n^2 \cdot p \cdot e \sum_{i=1}^{\lceil 4 \ln \ln n \rceil} \frac{i^2}{(\ln n)^2 e^i} \cdot \frac{\ln n}{i} \right) = O\left( \frac{n^2}{\ln n} \sum_{i=1}^{\lceil 4 \ln \ln n \rceil} \frac{i}{e^i} \right).
$$

Using the standard power series $\sum_{i=1}^{\infty} i \cdot x^i = \frac{x}{(1-x)^2}$ for $|x| < 1$, we have

$$ \sum_{i=1}^{\lceil 4 \ln \ln n \rceil} \frac{i}{e^i} < \sum_{i=1}^{\infty} i (1/e)^i = \frac{e^{-1}}{(1 - 1/e)^2} = O(1), $$

so $|F_{\mathcal{C}}| = O(n^2/\lg n)$ as desired. We will next see that by using biclique coverings instead of clique coverings we get a worst-case asymptotic improvement.

# 2.2 Biclique Covering Encodings

For a biclique $K_{a,b}$ (a complete bipartite graph with $a$ vertices on one part and $b$ on the other part), the independent-set property can also be encoded efficiently. Let $A$ and $B$ be the parts of $K_{a,b}$. Then, introduce an auxiliary variable $x_A$ which intuitively represents that some vertex of $A$ belongs to the desired independent set $S$. The desired formula is then

$$ \mathsf{BIS}(K_{a,b}) := \left( \bigwedge_{\nu \in A} (\overline{x_\nu} \vee x_A) \right) \wedge \left( \bigwedge_{\nu \in B} (\overline{x_A} \vee \overline{x_\nu}) \right). $$

Intuitively, the first part of the formula says that if some vertex $\nu \in A$ is selected, then $x_A$ will be true, and the second part enforces that $x_A$ being true forbids any vertex in $B$ from being selected. Note immediately that $|\mathsf{BIS}(K_{a,b})| = a + b = |V(K_{a,b})|$, as opposed to the direct encoding, which uses $|E(K_{a,b})| = a \cdot b$ clauses. As can be observed in Figure 1 (dashed purple box), this wastes a clause when $a = b = 1$, so we shall assume that in this case $\mathsf{BIS}(K_{1,1})$ will be the direct encoding. We again lift the biclique encoding to arbitrary graphs by coverings.
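The $\mathsf{BIS}$ encoding is equally mechanical. A sketch (again in illustrative DIMACS style, with the $K_{1,1}$ fallback to the direct clause mentioned above):

```python
def bis(A, B, x_aux):
    """BIS encoding of a biclique with parts A and B; x_aux is the
    fresh auxiliary variable meaning "some vertex of A is selected".
    Uses a + b clauses instead of the a * b of the direct encoding,
    except for K_{1,1}, where the single direct clause is smaller."""
    if len(A) == 1 and len(B) == 1:
        return [[-A[0], -B[0]]]
    return [[-v, x_aux] for v in A] + [[-x_aux, -v] for v in B]
```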
A biclique covering $\mathcal{B}$ for a graph $G$ is simply a set of bicliques $B_1, \ldots, B_m$ that are subgraphs of $G$ and such that $\bigcup_{i=1}^m E(B_i) = E(G)$.

Figure 1: A biclique covering of a bipartite graph with 10 vertices and 17 edges, resulting in a formula with 15 clauses.

Proposition 7. Let $\mathcal{B} = \{B_1, \ldots, B_m\}$ be a biclique covering of a graph $G$, and $S \subseteq V(G)$ some set of vertices. Then, $S$ is an independent set of $G$ if and only if for every $1 \leqslant i \leqslant m$, $S \cap V(B_i)$ is an independent set of $B_i$.

Proof. Suppose first that $S$ is an independent set of $G$. Then trivially $S \cap V(B_i)$ is also an independent set of $G$ for every $1 \leqslant i \leqslant m$, and as each $B_i$ is a subgraph of $G$, the sets $S \cap V(B_i)$ are also independent sets of $B_i$. For the opposite direction, assume that $S$ is not an independent set of $G$, and thus $|e \cap S| = 2$ for some $e \in E(G)$. Then, as $e \in E(B_i)$ for some $i \in \{1, \ldots, m\}$ by definition of covering, we have $e \cap V(B_i) = e$ and thus

$$ |e \cap (S \cap V(B_i))| = |(e \cap V(B_i)) \cap S| = |e \cap S| = 2, $$

implying $S \cap V(B_i)$ is not independent in $B_i$. □

Using Proposition 7 we directly obtain the following analog to Definition 4 and Lemma 5:

Proposition 8. Given a graph $G$, and a biclique covering $\mathcal{B}$ of $G$, the formula $F_{\mathcal{B}} := \bigwedge_{B \in \mathcal{B}} \mathsf{BIS}(B)$ is said to be a "BC-ISP" encoding for $G$. The formula $F_{\mathcal{B}}$ encodes the independent-set property of $G$, and has size $\sum_{B \in \mathcal{B}} |V(B)|$.
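Lifting $\mathsf{BIS}$ over a covering is a one-loop affair. The sketch below (with illustrative DIMACS numbering: vertex variables first, then fresh auxiliaries) also shows the contrast with clique coverings on $K_{n,n}$, where a single biclique covers all $n^2$ edges at once:

```python
def bc_isp(bicliques, first_aux):
    """BC-ISP encoding: one BIS block per biclique of the covering,
    drawing fresh auxiliary variables starting at first_aux."""
    clauses, aux = [], first_aux
    for A, B in bicliques:
        if len(A) == 1 and len(B) == 1:     # K_{1,1}: direct clause
            clauses.append([-A[0], -B[0]])
        else:
            clauses += [[-v, aux] for v in A] + [[-aux, -v] for v in B]
            aux += 1
    return clauses

# K_{n,n} is itself a biclique, so one BIS block suffices: 2n clauses,
# whereas any clique covering needs at least n**2 (size-2 cliques only).
n = 50
F = bc_isp([(list(range(1, n + 1)), list(range(n + 1, 2 * n + 1)))], 2 * n + 1)
```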
The key difference now is made by a result stating that every graph admits a biclique covering that is asymptotically smaller than just taking its set of edges.

Theorem 9 (Chung et al. (1983)). Every graph $G$ on $n$ vertices has a biclique covering $\mathcal{B}$ with $\sum_{B \in \mathcal{B}} |V(B)| = O(n^2/\lg n)$.

Combining Theorem 9 and Proposition 8 we get the main result of this section.

Theorem 10 (Formal version of Theorem 1). For every graph $G$ on $n$ vertices, there is a formula $F$ that encodes the independent-set property of $G$ such that $|F| = O(n^2/\lg n)$.

Figure 1 illustrates a biclique covering encoding, showing a reduction in size from the direct encoding. By avoiding the re-encoding of $K_{1,1}$ bicliques (lime green and purple in Figure 1), the size would go down to 13. We can trivially extend Theorem 10 to other graph problems, as we show next.

• (Vertex Cover) Since a set of vertices $S \subseteq V(G)$ is a vertex cover if and only if $V(G) \setminus S$ is an independent set, it suffices to invert the polarity of each literal $x_\nu$ (resp. $\overline{x_\nu}$) to $\overline{x_\nu}$ (resp. $x_\nu$) in the formula $F$ from Theorem 10, resulting in a formula of the same size.

• ($k$-Coloring) A $k$-coloring of a graph $G$ is the same as a partition of $V(G)$ into $k$ independent sets. Naturally, if we have variables $x_{\nu,c}$ for $\nu \in V(G)$ and $c \in \{1, \ldots, k\}$ that indicate assigning color $c$ to vertex $\nu$, then we simply use the conjunction of $k$ formulas obtained from Theorem 10, where the $c$-th of them is obtained by replacing each variable $x_\nu$ by $x_{\nu,c}$.
• (Clique) It suffices to use that a set of vertices $S \subseteq V(G)$ is a clique of $G$ if and only if $S$ is an independent set of the complement graph $\overline{G}$.

We conclude this section by noting that Theorem 10, and its analogs for the other graph problems mentioned above, can be made constructive by using a result of Mubayi and Turán (2010), which states that a biclique covering with the asymptotic guarantee of Theorem 9 can be computed deterministically in polynomial time.

# 3 Covering Frameworks

Even though the presented biclique-covering encodings are more efficient than clique-covering encodings over worst-case graphs, this is not necessarily the case on every graph. The main example is again $K_n$, for which we have the trivial clique covering $\mathcal{C} = \{K_n\}$, but for which every biclique covering uses at least $\lfloor \lg n \rfloor$ many bicliques (Fishburn and Hammer, 1996). Furthermore, the trivial clique covering results in a formula with $O(n)$ clauses, whereas any biclique covering results in $\Omega(n \lg n)$ clauses.

Proposition 11. There is a BC-ISP encoding for $K_n$ using $O(n \lg n)$ clauses, and any BC-ISP encoding for $K_n$ uses $\Omega(n \lg n)$ many clauses.

Proof. For the upper bound, we construct a biclique covering $\mathcal{B}$ recursively. First, we separate $V(K_n)$ into two parts $L$ and $R$, with $|L| = \lceil n/2 \rceil$ and $|R| = \lfloor n/2 \rfloor$. Then, we add to $\mathcal{B}$ the biclique $K(L, R)$ between the vertices in $L$ and the vertices in $R$, and proceed recursively on both $L$ and $R$, obtaining biclique coverings $\mathcal{B}_L$ and $\mathcal{B}_R$ respectively. The total biclique covering of $K_n$ is then given by $\mathcal{B} = \{K(L, R)\} \cup \mathcal{B}_L \cup \mathcal{B}_R$.
Thus, if we let $g(n)$ denote the number of clauses used in an optimal BC-ISP encoding for $K_n$ (i.e., one that minimizes $\sum_{B \in \mathcal{B}} |V(B)|$), we have $g(n) \leqslant n + g(\lfloor n/2 \rfloor) + g(\lceil n/2 \rceil) \leqslant n + 2g(\lceil n/2 \rceil)$, from where it follows that $g(n) = O(n \lg n)$.

The lower bound follows from a nice result of Dong and Liu (2007), which states that in any biclique covering $\mathcal{B}$ of $K_n$, every vertex $\nu \in V(K_n)$ appears in at least $\lfloor \lg n \rfloor$ many bicliques of $\mathcal{B}$. Indeed, using that result it suffices to say that

$$ |F_{\mathcal{B}}| = \sum_{B \in \mathcal{B}} |V(B)| = \sum_{B \in \mathcal{B}} \sum_{\nu \in V(K_n)} \mathbb{1}_{\nu \in V(B)} = \sum_{\nu \in V(K_n)} \sum_{B \in \mathcal{B}} \mathbb{1}_{\nu \in V(B)} \geqslant \sum_{\nu \in V(K_n)} \lfloor \lg n \rfloor = \Omega(n \lg n). $$

# 3.1 Bounded Variable Addition

Interestingly, it is possible to do better for $K_n$ by using a slight generalization of biclique coverings, as the Bounded Variable Addition (BVA) (Manthey et al., 2012) preprocessing technique does. Let us explain first how BVA operates, which requires the following definition, adapted from Haberlandt et al. (2023).

Definition 12 (Grid product). Given a clause $L$ and a set of clauses $\Gamma$, the "grid product" of $L$ and $\Gamma$ is the set of clauses $L \bowtie \Gamma := \bigcup_{\ell \in L} \bigcup_{\gamma \in \Gamma} \{\gamma \cup \{\ell\}\}$.
BVA works by iteratively identifying subsets of a formula $F$ that can be expressed as a grid product $L \bowtie \Gamma$ (note that this requires neither $L \in F$ nor $\Gamma \subseteq F$), and introducing a new auxiliary variable $y$ which allows replacing $L \bowtie \Gamma$ by the clauses $\bigwedge_{\ell \in L} (\overline{y} \vee \ell) \wedge \bigwedge_{\gamma \in \Gamma} (y \vee \gamma)$. Naturally, resolving on the new variable $y$ yields the replaced set of clauses, and thus the new formula is equisatisfiable to the original one.

Example 13. Let $L = (x_1 \vee x_2)$ and $\Gamma = \{(p \vee q), (q \vee r), (\overline{p} \vee \overline{r} \vee \overline{q})\}$. Then the corresponding grid product is

$$ L \bowtie \Gamma = \{ (x_1 \vee p \vee q), (x_1 \vee q \vee r), (x_1 \vee \overline{p} \vee \overline{r} \vee \overline{q}), (x_2 \vee p \vee q), (x_2 \vee q \vee r), (x_2 \vee \overline{p} \vee \overline{r} \vee \overline{q}) \}. $$

The new variable $y$ can be used to replace this set of clauses by the following ones:

$$ (\overline{y} \vee x_1) \wedge (\overline{y} \vee x_2) \wedge (y \vee p \vee q) \wedge (y \vee q \vee r) \wedge (y \vee \overline{p} \vee \overline{r} \vee \overline{q}). $$

Definition 14. Given a formula $F$, and a subset of $F$ that can be expressed as a grid product $L \bowtie \Gamma$, we say that the formula $F' := F \setminus (L \bowtie \Gamma) \wedge \bigwedge_{\ell \in L} (\overline{y} \vee \ell) \wedge \bigwedge_{\gamma \in \Gamma} (y \vee \gamma)$ is obtainable from $F$ by BVA, which we denote by $F \xrightarrow{\mathsf{BVA}} F'$.
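Definition 12 and one BVA step can be executed directly. The sketch below (clauses as Python frozensets of DIMACS literals, an illustrative choice matching the set-based conventions of Section 1.1) reproduces Example 13 with $x_1, x_2, p, q, r$ numbered $1, \ldots, 5$ and $y = 6$:

```python
def grid_product(L, Gamma):
    """Grid product of a clause L (iterable of literals) and a clause
    set Gamma: one clause gamma ∪ {l} per pair (Definition 12)."""
    return {frozenset(g) | {l} for l in L for g in Gamma}

def bva_step(F, L, Gamma, y):
    """One BVA replacement (Definition 14): swap the grid product
    L ⋈ Γ inside F for the definitional clauses of fresh variable y."""
    F = F - grid_product(L, Gamma)
    F |= {frozenset({-y, l}) for l in L}
    F |= {frozenset(g) | {y} for g in Gamma}
    return F
```

Resolving on $y$ recovers the replaced clauses, so the two formulas agree on every assignment of the original variables, which can be confirmed by brute force on Example 13.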
If there is a sequence of formulas $F_1, \ldots, F_k$ such that $F \xrightarrow{\mathsf{BVA}} F_1 \xrightarrow{\mathsf{BVA}} \cdots \xrightarrow{\mathsf{BVA}} F_k$, we say that $F_k$ is a potential BVA re-encoding of $F$, and write $F \overset{\mathsf{BVA}}{\leadsto} F_k$. Note that in the case of $\Gamma$ being a set of unit clauses, this matches Equation (2), from where we immediately have the following result:

Proposition 15. For every graph $G$ on $n$ vertices, there is a formula $F$ such that

$$ \bigwedge_{\{u, \nu\} \in E(G)} (\overline{x_u} \vee \overline{x_\nu}) \overset{\mathsf{BVA}}{\leadsto} F $$

and $|F| = O(n^2/\lg n)$.

An important difference between BVA and covering re-encodings is that the new variables created by BVA can also be part of the identified grid products, whereas the (bi)clique covering encodings only use the original set of variables. We will show that this difference is enough to obtain a linear encoding for the independent-set property of $K_n$, thus showing that BVA re-encodes pairwise cardinality constraints into a linear number of clauses. While this was already observed without proof in the original BVA paper of Manthey et al. (2012), we remark that the authors have declared that their results were only empirical (Biere, 2023). We now provide a formal proof, noting that it applies to an idealized version of BVA, as opposed to its actual implementation, which is more subtle.

Proposition 16. There is a formula $F$ such that $\mathsf{AMO}(x_1, \ldots, x_n) \overset{\mathsf{BVA}}{\leadsto} F$, and $|F| = O(n)$.

Proof. We prove that for $n \geqslant 3$, there is such an $F$ with $|F| = 3n - 6$. The proof is by induction on $n$.
For $n = 3$ and $n = 4$, we simply take $F := \mathsf{AMO}(x_1, \ldots, x_n)$, which has size $\binom{3}{2} = 3 = 3 \cdot 3 - 6$ and $\binom{4}{2} = 6 = 3 \cdot 4 - 6$ respectively. For $n \geqslant 5$, consider first the grid product $\{\overline{x_1}, \overline{x_2}, \overline{x_3}\} \bowtie \{\{\overline{x_4}\}, \{\overline{x_5}\}, \ldots, \{\overline{x_n}\}\}$, whose clauses are clearly contained in $\mathsf{AMO}(x_1, \ldots, x_n)$. The replacement of this grid product by BVA can be split into the following sets of clauses:

1. The clauses $(\overline{x_1} \vee \overline{x_2})$, $(\overline{x_1} \vee \overline{x_3})$, and $(\overline{x_2} \vee \overline{x_3})$, unaltered by the replacement.

2. The clauses $(\overline{y} \vee \ell)$ for $\ell \in \{\overline{x_1}, \overline{x_2}, \overline{x_3}\}$.

3. The clauses $(y \vee \gamma)$ for $\gamma \in \{\overline{x_4}, \overline{x_5}, \ldots, \overline{x_n}\}$.

4. The clauses $(\overline{x_i} \vee \overline{x_j})$ for $4 \leqslant i < j \leqslant n$, also unaltered by the replacement.

There are only 6 clauses of types (1) and (2), and for the clauses of types (3) and (4), the key observation is that they correspond exactly to the formula $\mathsf{AMO}(\overline{y}, x_4, \ldots, x_n)$, which by the induction hypothesis admits a BVA re-encoding $F'$ of size $3(n-2) - 6$.
Thus, denoting by $S$ the set of clauses of types (1) and (2), we have

$$ \mathsf{AMO}(x_1, \ldots, x_n) \xrightarrow{\mathsf{BVA}} S \cup \mathsf{AMO}(\overline{y}, x_4, \ldots, x_n) \overset{\mathsf{BVA}}{\leadsto} S \cup F', $$

and $|S \cup F'| = 6 + 3(n-2) - 6 = 3n - 6$, as desired. □

# 3.2 A Lower Bound for the Independent-Set Property

We now turn our attention to whether BVA, or some other re-encoding technique, could yield an asymptotic improvement over Theorem 10. While we do not manage to answer this question in full generality, we show that if we restrict ourselves to 2-CNF formulas (such as the ones obtained by BVA, or the covering encodings), then we can prove a matching lower bound of $\Omega(n^2/\lg n)$, using a similar argument to Jukna (2013, Theorem 1.7). In fact, our result holds for $k$-CNF (at most $k$ literals per clause) for any fixed $k$.

Proposition 17. Fix an integer $k \geqslant 2$. Then, for any sufficiently large $n$, there is a graph $G_n$ on $n$ vertices such that any $k$-CNF formula encoding the independent-set property of $G_n$ has size $\Omega(n^2/\lg n)$.

Proof. There are $2^{\binom{n}{2}}$ many distinct $n$-vertex graphs, and we will prove first that each of them requires a different formula to encode its independent-set property. Assume, expecting a contradiction, that some formula $F$ encodes the independent-set property for two distinct $n$-vertex graphs $G$ and $G'$.
Then, take an edge $\{u, \nu\}$ that is in $G$ but not in $G'$ (without loss of generality), and consider the assignment $\tau$ defined by

$$ \tau(x_w) = \begin{cases} \top & \text{if } w \in \{u, \nu\}, \\ \bot & \text{otherwise.} \end{cases} $$

Then, according to Definition 3, since $F$ encodes the independent-set property for $G$, we have that $F|_\tau$ is satisfiable iff $\{u, \nu\}$ is an independent set in $G$, which is not the case since $\{u, \nu\} \in E(G)$, thus making $F|_\tau$ unsatisfiable. However, since $F$ also encodes the independent-set property for $G'$, we conclude that $F|_\tau$ is satisfiable, which is a contradiction.

Now, we fix without loss of generality the set of variables for the $k$-CNF formulas encoding the independent-set property to be $\{x_1, \ldots, x_n, y_1, \ldots, y_r\}$, where $r$ is the maximum number of auxiliary variables used in any of the formulas. Consider now that there are $2^r \binom{m}{r}$ many possible $r$-ary clauses over a set of $m$ variables (since each of the $r$ chosen variables can appear with either polarity), and thus, the number of $k$-CNF formulas with $t$ clauses is at most

$$ \left( \sum_{r=1}^{k} 2^r \binom{m}{r} \right)^t \leqslant \left( k 2^k \binom{m}{k} \right)^t \leqslant \left( k 2^k \binom{t}{k} \right)^t, $$

where the last inequality assumes without loss of generality that the formulas do not contain pure literals (which can simply be removed), and thus the number of variables $m$ is at most the number of clauses $t$.
As we proved that each different graph needs a different formula, we have that

$$ \left( k 2^k \binom{t}{k} \right)^t \geqslant 2^{\binom{n}{2}} \iff t \cdot \lg\left( k 2^k \binom{t}{k} \right) \geqslant \binom{n}{2} \implies t \cdot \lg(t) \geqslant c \cdot n^2 \implies t \geqslant c \cdot n^2 / \lg(t) $$

for some constant $c > 0$. But unless $t = \Omega(n^2)$, in which case we can directly conclude the result, we have that $\lg(t) \leqslant 2 \lg(n)$, and thus we conclude that $t = \Omega(n^2/\lg n)$. □

# 4 Disjoint Intervals

We now study the independent-set property over a class of graphs that is relevant for a variety of scheduling-related problems: interval graphs.

Definition 18. For any $n \geqslant 2$, we define the full and discrete interval graph $\mathcal{I}_n$ as the graph whose vertices are all the intervals $[i, j]$ for integers $1 \leqslant i < j \leqslant n$, and there are edges between any two intervals that intersect.

An example is illustrated in Figure 2. Naturally, the independent sets of $\mathcal{I}_n$ correspond to sets of intervals that do not intersect, and thus for which tasks competing for resources can all be scheduled. Note that $\mathcal{I}_n$ has $\Omega(n^4)$ edges, as for each subset $\{i, j, k, \ell\} \subseteq \{1, \ldots, n\}$ with $i < j < k < \ell$ we have an edge between vertices $[i, k]$ and $[j, \ell]$. Therefore, a direct encoding of the independent-set property is very large even for small values of $n$. We will prove that this can be drastically improved to $O(n^2 \lg n)$, but first we analyze how well clique coverings do for this family of graphs.

Proposition 19.
There is a CC-ISP encoding for $\mathcal{I}_n$ using $O(n^3)$ clauses, and no CC-ISP encoding can be asymptotically more compact.

Figure 2: Illustration of the interval graph $\mathcal{I}_5$ in two different representations, where the only two independent sets of size larger than 1 are depicted, one in red and one in cyan.

Proof. First, we note that for every $k \in \{2, \ldots, n-1\}$, all the intervals $[i, j] \in V(\mathcal{I}_n)$ with $i \leqslant k \leqslant j$ intersect, thus forming a clique that we denote $K_{\cap k}$. Then, observe that the collection of cliques $K_{\cap k}$, for $2 \leqslant k \leqslant n-1$, is a clique covering of $\mathcal{I}_n$. Indeed, any edge $e = \{[i, j], [a, b]\}$ with $i \leqslant a$ must either have $a \leqslant j \leqslant b$, in which case $e$ is covered by $K_{\cap j}$, or $i \leqslant a \leqslant j$, in which case $e$ is covered by $K_{\cap a}$. Each clique $K_{\cap k}$ has $O(n^2)$ vertices (there are $O(n^2)$ vertices in the entire $\mathcal{I}_n$), and thus this clique covering results in $\sum_{k=2}^{n-1} |\mathsf{AMO}_{\mathsf{PE}}(V(K_{\cap k}))| = O(n^3)$ many clauses.

For the lower bound, consider an arbitrary clique covering $\mathcal{C}$ of $\mathcal{I}_n$. Then, observe that each interval $x := [i, j]$ is adjacent to all the intervals $[i + 2t, i + 2t + 1]$ for $t \in \{0, \ldots, \lfloor (j - i - 1)/2 \rfloor\}$. Moreover, all these intervals $[i + 2t, i + 2t + 1]$ are pairwise disjoint, which implies that for each $t$, the edge $\{x, [i + 2t, i + 2t + 1]\}$ must be covered by a different clique $C_t \in \mathcal{C}$. Thus, we have that $|\{C \in \mathcal{C} : x \in V(C)\}| \geqslant \lfloor (j - i - 1)/2 \rfloor + 1 \geqslant (j - i)/3$.
We can now conclude since

$$ \sum_{C \in \mathcal{C}} |\mathsf{AMO}_{\mathsf{PE}}(V(C))| \geqslant \sum_{C \in \mathcal{C}} |V(C)| = \sum_{x \in V(\mathcal{I}_n)} |\{C \in \mathcal{C} : x \in V(C)\}| \geqslant \sum_{i=1}^{n} \sum_{j=i+1}^{n} (j - i)/3, $$

from where the change of variables $d := j - i$ yields

$$ \sum_{i=1}^{n} \sum_{j=i+1}^{n} (j - i)/3 = \sum_{i=1}^{n} \sum_{d=1}^{n-i} d/3 \geqslant \frac{1}{3} \sum_{i=1}^{\lfloor n/2 \rfloor} \sum_{d=1}^{\lceil n/2 \rceil} d = \frac{1}{3} \lfloor n/2 \rfloor \cdot \frac{\lceil n/2 \rceil (\lceil n/2 \rceil + 1)}{2} = \Omega(n^3). $$

# 4.1 The Interval Propagation Trick

The proof of Theorem 2, the main result of this section, is quite technical, and it is worth isolating one of its ingredients, which might be of independent interest. Consider the following encoding problem: we have variables $x_{i,j}$, representing that an interval $[i, j]$ is "selected", for $1 \leqslant i < j \leqslant n$, and also variables $t_\ell$, for $1 \leqslant \ell \leqslant n$, whose intended semantics are that $t_\ell$ is true if and only if the index $\ell$ is contained in some selected interval. The problem is how to efficiently encode this relationship between the $x_{i,j}$ and $t_\ell$ variables, without enforcing any other conditions on either the $x$- or $t$-variables. For example, if $x_{2,4}$ and $x_{7,9}$ are the only $x$-variables assigned to $\top$, then $\{t_2, t_3, t_4, t_7, t_8, t_9\}$ should be assigned to $\top$, and every other $t_\ell$ variable to $\bot$.
The fact that $t_\ell$ implies that some interval containing $\ell$ is selected is trivial to encode, by just adding the following $O(n)$ clauses:

$$ \overline{t_\ell} \vee \bigvee_{[i,j] \ni \ell} x_{i,j}, \quad \forall 1 \leqslant \ell \leqslant n. $$

The other direction admits a nice trick. The naïve way of encoding the implication from the $x$-variables toward the $t$-variables is to simply add clauses of the form $(\overline{x_{i,j}} \vee t_\ell)$, for every $1 \leqslant i < j \leqslant n$ and every $i \leqslant \ell \leqslant j$, which amounts to $\sum_{i=1}^{n} \sum_{j=i+1}^{n} (j - i) = \Omega(n^3)$ many clauses, by the same analysis of the sum used in the proof of Proposition 19. It turns out, however, that we can achieve this with $O(n^2)$ many clauses, using what we call the "interval propagation trick". First, we create variables $z_{i,j}$ for each $1 \leqslant i < j \leqslant n$, and then add the following clauses:

1. $(\overline{x_{i,j}} \vee z_{i,j})$, for every $1 \leqslant i < j \leqslant n$.

2. $(\overline{z_{i,i+1}} \vee t_i)$ and $(\overline{z_{i,i+1}} \vee t_{i+1})$, for every $1 \leqslant i < n$.

3. $(\overline{z_{i,j}} \vee z_{i+1,j})$ and $(\overline{z_{i,j}} \vee z_{i,j-1})$, for every $1 \leqslant i < j - 1 < n$.

4. $(\overline{z_{i,j}} \vee x_{i,j} \vee z_{i-1,j} \vee z_{i,j+1})$, for every $1 \leqslant i < j \leqslant n$, removing the nonsensical literal $z_{i-1,j}$ when $i = 1$, and $z_{i,j+1}$ when $j = n$.
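The clause families (1)-(4) can be generated directly. A sketch (with illustrative variable maps `x`, `z`, `t` from intervals and positions to DIMACS variable numbers):

```python
def ipt_clauses(n, x, z, t):
    """Clauses (1)-(4) of the interval propagation trick for
    intervals [i, j] with 1 <= i < j <= n."""
    cl = []
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            cl.append([-x[i, j], z[i, j]])            # (1) selected -> z
            if j == i + 1:
                cl.append([-z[i, j], t[i]])           # (2) unit intervals
                cl.append([-z[i, j], t[j]])           #     reach the t's
            else:
                cl.append([-z[i, j], z[i + 1, j]])    # (3) propagate down
                cl.append([-z[i, j], z[i, j - 1]])    #     to subintervals
            c = [-z[i, j], x[i, j]]                   # (4) z is justified
            if i > 1:
                c.append(z[i - 1, j])
            if j < n:
                c.append(z[i, j + 1])
            cl.append(c)
    return cl
```

Each interval contributes four clauses, so this part has $4\binom{n}{2} \leqslant 2n^2$ clauses, comfortably within the stated $6n^2$ bound on $\mathsf{IPT}_n$.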
To formalize correctness, let us denote by $\mathsf{NIP}_n$ the formula resulting from the aforementioned clauses in the naïve encoding (i.e., those of the forms $\overline{t_\ell}\vee\bigvee_{[i,j]\ni\ell} x_{i,j}$ and $(\overline{x_{i,j}}\vee t_\ell)$), and by $\mathsf{IPT}_n$ the formula resulting from the clauses of the form $\overline{t_\ell}\vee\bigvee_{[i,j]\ni\ell} x_{i,j}$ together with the clauses of types (1-4) above. Note that $|\mathsf{IPT}_n| \leqslant 6n^2 = O(n^2)$, and let us now state the desired form of "equivalence" between these formulations. Proposition 20. Let $\tau : \mathsf{var}(\mathsf{NIP}_n) \to \{\bot,\top\}$ be an assignment. Then, we have that $$ \tau \models \mathsf{NIP}_n \iff \mathsf{SAT}(\mathsf{IPT}_n|_\tau), $$ and moreover, any satisfying assignment $\theta$ for $\mathsf{IPT}_n|_\tau$ must assign $\theta(z_{a,b}) = \top$ if and only if there is some $[i,j]$ such that $\tau(x_{i,j}) = \top$ and $[a,b] \subseteq [i,j]$. The proof of Proposition 20 is a rather tedious induction, and thus we defer it to Appendix A.
# 4.2 Better Encodings for ${\mathcal{I}}_n$
Our next result, which uses only $O(n^2\lg n)$ many clauses, requires a more careful encoding, which starts by decomposing $[1,n]$ into blocks of size at most $b$, which we do by assigning to each position $1\leqslant i\leqslant n$ a block number $B(i) := \lceil i/b\rceil$. For now we will keep $b$ as a parameter, and denote by $k := \lceil n/b\rceil$ the number of blocks. We can characterize the different edges of ${\mathcal{I}}_n$ in terms of the blocks of their vertices, as the following lemma states. Lemma 21.
Each edge $e := \{[i_1,j_1],[i_2,j_2]\} \in E(\mathcal{I}_n)$ with $i_1\leqslant i_2$ must be part of exactly one of the following cases:
1. $B(i_1)=B(i_2)$ and $B(j_1)=B(j_2)$, and $i_2\leqslant j_1$, in which case we say $e$ is an $x$-edge.
2. $B(i_1) < B(i_2) < B(j_1)$, in which case we say $e$ is a $y$-edge.
3. $B(i_1)=B(i_2)$ and $i_2\leqslant j_1$, but $B(j_1)\neq B(j_2)$, in which case we say $e$ is an $s$-edge.
4. $B(i_1) < B(i_2) = B(j_1) = B(j_2)$ and $i_2\leqslant j_1$, in which case we say $e$ is an $f$-edge.
5. $B(i_1) < B(i_2) = B(j_1) \neq B(j_2)$, and $i_2\leqslant j_1$, in which case we say $e$ is an $m$-edge.
Moreover, any tuple $(i_1,j_1,i_2,j_2)$ with $i_1\leqslant i_2$ that satisfies one of these cases implies $\{[i_1,j_1],[i_2,j_2]\} \in E(\mathcal{I}_n)$.
Figure 3: Illustration of Lemma 21. The type of each edge is indicated by its label, and blocks are separated by dashed blue lines.
An illustration of Lemma 21 is provided in Figure 3, and the proof is just case analysis and thus deferred to Appendix B. Theorem 22. The independent-set property of ${\mathcal{I}}_n$ can be encoded using at most $26n^2\lg n$ clauses. Proof. We will prove a slightly stronger statement in order to have a stronger inductive hypothesis. Let $\mathcal{I}_n^0$ be the graph whose vertices are all the intervals $[i,j]$ for integers $1\leqslant i<j\leqslant n$, but now with edges only between intervals whose intersection has cardinality at least 2.
That is, $\{[1,3],[3,5]\}$ is not an edge in $\mathcal{I}_5^0$ but it is in $\mathcal{I}_5$. The proof is by (strong) induction on $n\geqslant 2$. The base case $n=2$ is trivial, since we can use the direct encoding there. In fact, it is easy to see that the direct encoding uses at most $3\binom{n}{4} \leqslant \frac{n^4}{8}$ clauses, and one can computationally check that for $n\leqslant 32$, $\frac{n^4}{8} \leqslant 26n^2\lg n$, and thus the direct encoding is enough for the result. We thus assume $n>32$ from now on. We now focus on the inductive case, where we will generally assume that we are encoding the independent-set property for ${\mathcal{I}}_n$, but indicate whenever a slight change is needed for $\mathcal{I}_n^0$, since the two cases are almost identical. The encoding will consider each of the five types of edges from Lemma 21 separately. The base variables are $x_{i,j}$ (for $1\leqslant i<j\leqslant n$), representing that the interval $[i,j]$ is part of the independent set. • ($x$-edges) Consider a fixed choice of $B(i_1)=B(i_2)=\ell$ and $B(j_1)=B(j_2)=r$, and note that there will be $k^2$ such choices, since there are $k$ blocks in total. If $\ell=r$, then it is easy to see that the $x$-edges whose endpoints are in block $\ell$ form a graph isomorphic to $\mathcal{I}_b$ (resp. $\mathcal{I}_b^0$), and thus by the inductive hypothesis they can be encoded using at most $26b^2\lg b$ clauses. If $\ell<r$, then we consider the graph $G_{\ell,r}$ formed by the $x$-edges whose endpoints are in blocks $\ell$ and $r$, even if they are both in $\ell$ or both in $r$. This time $G_{\ell,r}$ is isomorphic to $\mathcal{I}_{2b}$ (resp.
$\mathcal{I}_{2b}^{0}$), and thus by the inductive hypothesis all these $x$-edges can be encoded using at most $26(2b)^2\lg(2b) = 104b^2\lg b + 104$ clauses. As there are $k^2$ choices for $\ell, r$, we can encode all the $x$-edges of ${\mathcal{I}}_n$ using at most $k^2\cdot(104b^2\lg b + 104)$ clauses. • ($y$-edges) We create, for each pair of block-indices $1\leqslant\ell<r\leqslant k$, an auxiliary variable $y_{\ell,r}$ that represents that there is some interval $[i,j]$ in the independent set such that $B(i)=\ell$ and $B(j)=r$. To enforce these semantics, we add clauses $$ \begin{array}{rl} \overline{x_{i,j}} \vee y_{B(i),B(j)}, & \forall 1\leqslant i<j\leqslant n, \\ \Bigl(\overline{y_{\ell,r}} \vee \bigvee_{i,j\,:\,B(i)=\ell,\,B(j)=r} x_{i,j}\Bigr), & \forall 1\leqslant\ell<r\leqslant k. \end{array} $$ The key observation now is that the graph $G_y$, whose vertices are the $y_{\ell,r}$ variables and which has edges between variables $y_{\ell_1,r_1}, y_{\ell_2,r_2}$ whenever $\ell_1<\ell_2<r_1$, is isomorphic to $\mathcal{I}_k^0$. Therefore, by the inductive hypothesis, we can encode all the $y$-edges using $\binom{n}{2} + \binom{k}{2} + 26k^2\lg k$ many clauses. • ($s$-edges) We create, for each block-index $2\leqslant r\leqslant k$, and position $i$ such that $B(i)<r$, an auxiliary variable $s_{i,r}$ that represents that there is some interval $[i,j]$ in the independent set such that $B(j)=r$.
We encode these semantics using clauses $(\overline{x_{i,j}} \vee s_{i,B(j)})$ and $\bigl(\overline{s_{i,r}} \vee \bigvee_{j \text{ s.t. } B(j)=r} x_{i,j}\bigr)$, which amount to at most $2n^2$ clauses. Then, we add clauses $$ \overline{s_{i_1,r_1}} \vee \overline{s_{i_2,r_2}}, \quad \forall 1\leqslant i_1\leqslant i_2\leqslant n \text{ s.t. } B(i_1)=B(i_2), \ \forall r_1,r_2 > B(i_1) \text{ with } (i_1,r_1)\neq(i_2,r_2). \tag{5} $$ There are at most $b^2\cdot k^3$ clauses from Equation (5), since we need to choose $B(i_1), r_1, r_2$, for which there are $k^3$ possibilities, and then $b^2$ possibilities for $i_1, i_2$ such that $B(i_1)=B(i_2)$. Unfortunately, this is not enough to encode all $s$-edges, since Equation (5) misses the cases where one of the two intervals is entirely contained in one block, so either $B(i_1)=B(j_1)$ or $B(i_2)=B(j_2)$. The naïve solution would be to add the following clauses: $$ \begin{array}{rl} \overline{x_{i_1,j_1}} \vee \overline{s_{i_2,r}}, & \forall 1\leqslant i_1\leqslant i_2\leqslant n, \forall j_1\geqslant i_2 \text{ s.t. } j_1>i_1 \text{ and } B(i_1)=B(j_1)=B(i_2), \forall r>B(i_2), \quad (6) \\ \overline{x_{i_2,j_2}} \vee \overline{s_{i_1,r}}, & \forall 1\leqslant i_1\leqslant i_2\leqslant n, \forall j_2>i_2 \text{ s.t. } B(i_1)=B(i_2)=B(j_2), \forall r>B(i_1). \quad (7) \end{array} $$ Equation (6) uses at most $b^3\cdot k^2$ clauses, since we need to choose $B(i_1)=B(j_1)=B(i_2)$ and $r$, and for each such choice there are at most $b^3$ possibilities for $i_1, i_2, j_1$ within the same block. Analogously, Equation (7) also incurs at most $b^3\cdot k^2$ clauses.
However, it will turn out that $b^3\cdot k^2$ clauses would be too many, since we will end up setting $k=\Theta(\lg n)$, and thus we need a slightly better way to encode Equations (6) and (7). The solution is to use the "interval propagation trick" from Section 4.1 independently in each block of index $1\leqslant d\leqslant k$, thanks to which we can assume variables $t_\ell^d$ that represent whether some $x_{i,j}$ with $\ell\in[i,j]$ and $B(\ell)=B(i)=B(j)=d$ is true, using at most $6b^2$ clauses. In total over the $k$ blocks this incurs at most $6b^2\cdot k$ clauses. Now, we can replace Equations (6) and (7) by $$ \overline{t_\ell^{B(i)}} \vee \overline{s_{i,r}}, \quad \forall 1\leqslant i\leqslant\ell\leqslant n \text{ s.t. } B(\ell)=B(i), \forall r>B(i). \tag{8} $$ Equation (8) only requires $k\cdot b^2$ many clauses, since there are at most $k$ choices for $r$, and $b^2$ for $i,\ell$. We thus cover all $s$-edges using a total of $2n^2 + b^2k^3 + 7b^2k$ clauses. • ($f$-edges) This case is fully symmetrical to the $s$-edges, this time using variables $f_{\ell,j}$ that represent the presence of some interval $[i,j]$ in the independent set such that $B(i)=\ell$. We add the symmetrical clauses; e.g., the symmetrical version of Equation (5) is $$ \overline{f_{\ell_1,j_1}} \vee \overline{f_{\ell_2,j_2}}, \quad \forall 1\leqslant j_1, j_2\leqslant n \text{ s.t. } B(j_1)=B(j_2), \forall \ell_1<B(j_1), \ell_2<B(j_1), $$ and similarly with the symmetrical version of Equation (8).
A minor saving is that we do not need to pay any extra clauses for the variables $t_\ell^d$, and therefore the combination of this case with the $s$-edges incurs a total of $4n^2 + 2b^2k^3 + 8b^2k$ clauses. • ($m$-edges) In this case we are dealing with intervals $[i_1,j_1]$ and $[i_2,j_2]$ such that $B(j_1)=B(i_2)$, which we call $d := B(j_1)$, with the other two endpoints not in block $d$. We can cover these by using both our $s$- and $f$-variables. Indeed, it suffices to add clauses $$ \overline{f_{\ell,j_1}} \vee \overline{s_{i_2,r}}, \quad \forall 1\leqslant i_2\leqslant j_1\leqslant n \text{ s.t. } B(j_1)=B(i_2), \forall \ell<B(j_1), \forall r>B(i_2), $$ which amounts to at most $b^2k^3$ clauses, since we have to choose $\ell, r, B(j_1)$, for which there are $k^3$ options, and conditioned on $B(j_1)$ there are at most $b^2$ choices for $j_1, i_2$. Adding the total number of clauses over all types, we get a total of $$ k(104b^2\lg b + 104) + \binom{n}{2} + \binom{k}{2} + 26k^2\lg k + 4n^2 + 2b^2k^3 + 8b^2k + b^2k^3 $$ clauses. Using $n = kb$, this is at most $104nb\lg b + 104k + 4.5n^2 + 0.5k^2 + 26k^2\lg k + 3n^2k + 8nb$, and then taking $k = \lfloor\lg n\rfloor$ and $b = n/\lfloor\lg n\rfloor$, this is at most $$ 3n^2\lg n + n^2\left(108.5 + \frac{\lg n + 8.5\lg^2(n)\lg\lg n}{n^2} + \frac{8}{\lg n}\right) \leqslant 3n^2\lg n + 109.5n^2, $$ but as $n>32$, we have $\lg n > 5$, and thus $109.
5n^2 < 22n^2\lg n$, from where $3n^2\lg n + 109.5n^2 < 25n^2\lg n$, and thus we conclude our result. □
# 4.3 Applications to Scheduling Problems
We consider the non-preemptive schedulability question treated by Mayank and Mondal (2020), where there are $N$ tasks, the $i$-th of which has an integer duration $d_i$ and must be both started and finished within the interval $[r_i, e_i]$, with $1\leqslant r_i\leqslant e_i\leqslant T$, and moreover, there are $M$ machines which can do tasks in parallel. They present several SAT encodings, all of which use $\Omega(NMT^2)$ clauses (Mayank and Mondal, 2020, Table 2). Theorem 22 allows us to do better: Theorem 23 (Informal). The non-preemptive schedulability problem can be encoded using $O(NMT + MT^2\lg T)$ clauses. Proof. Create variables $x_{i,t,m}$, representing that task $i$ is assigned to start on machine $m$ at time $t$, and auxiliary variables $y_{m,t_1,t_2}$ that represent that there is some task assigned to machine $m$ for exactly the interval $[t_1,t_2]$. The semantics of the $y$-variables are encoded by clauses $$ \overline{x_{i,t,m}} \vee y_{m,t,t+d_i}, \quad \forall i\in[1,N], t\in[r_i,e_i], m\in[1,M], $$ using $O(NMT)$ clauses.
We partition the tasks according to their duration, with $D_t = \{i : d_i = t\}$, so that we can enforce $\mathsf{AMO}_{\mathsf{PE}}(\{x_{i,t',m} : i \in D_t\})$, for each $t'\in[1,T]$, $t\in[1,T-t']$ and $m\in[1,M]$, which uses $$ \sum_{m=1}^{M}\sum_{t'=1}^{T}\sum_{t=1}^{T-t'} O(|D_t|) \leqslant MT\cdot O\!\left(\sum_{t=1}^{T}|D_t|\right) = O(NMT) $$ many clauses. Then, for each index $i$, we enforce that task $i$ is done at some point, on some machine, with $O(N)$ clauses: $$ \bigvee_{\substack{m\in[1,M]\\ t\in[r_i,\,e_i-d_i]}} x_{i,t,m}, \quad \forall i\in[1,N]. $$ We then use, independently for each $m$, the encoding of Theorem 22 to encode that the variables $y_{m,t_1,t_2}$ assigned to true form disjoint intervals. This results in $O(NMT + MT^2\lg T)$ clauses. Correctness follows from the facts that (i) we explicitly enforce that each task is done at some point, on some machine, respecting its time constraints, (ii) the $\mathsf{AMO}_{\mathsf{PE}}$ constraints ensure that no two tasks are assigned to the same machine during the same time interval, and (iii) the disjoint-intervals encoding on the $y_{m,t_1,t_2}$ variables ensures that no machine is used for two overlapping time intervals. □ When $N$ and $M$ are $\Theta(T)$, Theorem 23 results in $O(T^3\lg T)$ clauses, as opposed to the $\Omega(T^4)$ clauses of Mayank and Mondal (2020).
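The gap can be illustrated numerically (a small Python sketch of ours, with hypothetical toy parameters; only the two explicit clause families from the proof are counted, and the AMO and disjoint-interval parts are left at their asymptotic bounds):

```python
import math

def explicit_clause_counts(r, e, M):
    # Count the two explicit clause families from the proof of Theorem 23:
    # the x -> y linking clauses (one per valid (i, t, m)) and the per-task
    # "runs somewhere" clauses (one long clause per task).
    N = len(r)
    link = M * sum(e[i] - r[i] + 1 for i in range(N))
    cover = N
    return link, cover

# Hypothetical toy instance: 3 tasks with windows [1,5], [2,6], [1,4] and 2 machines.
link, cover = explicit_clause_counts(r=[1, 2, 1], e=[5, 6, 4], M=2)
print(link, cover)  # -> 28 3

# Asymptotic comparison when N and M are Theta(T): Omega(T^4) vs O(T^3 lg T).
T = N = M = 100
old = N * M * T**2                          # Omega(N*M*T^2) clauses
new = N * M * T + M * T**2 * math.log2(T)   # O(N*M*T + M*T^2*lg T) clauses
assert new < old / 10
```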
It is worth noting that, as done for example by Marić (2008), an alternative option would be to add, for each time $t$ and machine $m$, a constraint $$ \mathsf{AMO}_{\mathsf{PE}}(\{x_{i,t',m} : i\in[N], t'\in[t-d_i,t]\}), $$ however $\{x_{i,t',m} : i\in[N], t'\in[t-d_i,t]\}$ is a set of size $\Omega(NT)$, and thus this would result in $\Omega(NMT^2)$ clauses.
# 5 Discussion and Future Work
We have proposed a theoretical framework that will hopefully be helpful in the quest for understanding the limits of CNF encodings. To do so, we considered "covering encodings", based on covering the graph of incompatible pairs of literals with either cliques or bicliques, leading to different encodings. We showed that clique coverings and biclique coverings have different advantages: clique coverings are more efficient on the complete graph, while biclique coverings are more efficient in the worst case (Theorem 10). This difference essentially vanishes for random Erdős–Rényi graphs (Proposition 6). Moreover, it is worth noting that clique coverings are also very efficient when the graph is "very close to being complete", meaning that every vertex has degree at least $n-\Theta(1)$, as in this case a nice result of Alon (1986) gives a covering with $O(\lg n)$ cliques and thus an encoding with $O(n\lg n)$ clauses. We have shown a modest lower bound in Proposition 17, which only applies to encodings using constant-width clauses. Extending this to general encodings is an interesting direction for future work. Even though our study here has been theoretical in nature, we have implemented clique-covering and biclique-covering algorithms.
In particular, we tested the algorithm of Mubayi and Turán (2010) for biclique coverings with the guarantee of Theorem 9; however, that algorithm was designed to prove its asymptotic quality and gives poor results for small $n$. Therefore, a natural next step is to design a more practically efficient algorithm to find good biclique coverings. For clique coverings, we tested both the greedy approach of selecting a maximum clique at a time (for which we used the Cliquer tool (Niskanen and Östergård, 2002)) and the specific clique-covering tool from Conte et al. (2016). The resulting quality of the coverings was not too different, but the latter algorithm was orders of magnitude faster for formulas with $\approx 1000$ variables. A more thorough experimental evaluation is in order, especially considering the several more modern maximum-clique algorithms, with their different trade-offs (Wu and Hao, 2015). In terms of work related to our covering ideas, beyond its nearest neighbors, the works of Rintanen (2006) and Ignatiev et al. (2017), we highlight that Jukna (2013) provides an in-depth treatment of the relationship between biclique coverings and the complexity of formulas representing graphs, especially bipartite ones. Rintanen (2006) seems to be the earliest occurrence of the clique/biclique covering idea, although it is worth noting that his work explicitly states that his biclique representation still yields $\Omega(n^2)$ clauses for graphs of $n$ vertices (Rintanen, 2006, Section 6). We have also studied how our framework applies to the case of complete interval graphs, which are useful for encoding planning and scheduling problems. In fact, the primitive of encoding disjoint intervals seems to be useful in diverse contexts: a prior version of our encoding, which uses $O(n^{8/3}) = O(n^{2.
666\cdots})$ clauses, was successfully used to improve an $O(n^4)$ encoding for Straight Line Programs (SLPs) from Bannai et al. (2022), where a slight variant of the disjoint-intervals property was the bottleneck; the improved encoding led to a total of $O(n^3)$ clauses, making a different constraint the bottleneck. The improved SLP encoding incorporating our ideas is currently under review (Bannai et al., 2025). To obtain $O(n^{8/3})$ clauses, the encoding is essentially the same as the one in Theorem 22, except that instead of proceeding recursively for the $x$-edges and the $y$-edges, we encode those directly. This leads to $O(k^2b^4)$ clauses for the $x$-edges, since for any pair of blocks $\ell, r$ (of which there are $k^2$), we forbid the $x$-edges between intervals $[i,j]$ and $[i',j']$ such that $B(i)=B(i')=\ell$ and $B(j)=B(j')=r$, and there are at most $b^4$ choices for $i,j,i',j'$ conditioned on their blocks being $\ell$ and $r$. For the $y$-edges, this leads to $O(k^4)$ clauses, since we need to forbid the $y$-edges between pairs of blocks that guarantee an interval overlap. Therefore, the total number of clauses is $O(k^2b^4 + k^4 + n^2 + k^3b^2)$, for which a simple calculus argument shows that the optimal choice is $k=n^{2/3}$ and $b=n^{1/3}$, leading to the number of clauses being $$ O(n^{4/3}n^{4/3} + n^{8/3} + n^2 + n^{6/3}n^{2/3}) = O(n^{8/3} + n^{8/3} + n^2 + n^{8/3}) = O(n^{8/3}). $$ The difference between the $O(n^{8/3})$ encoding and the result in Theorem 22 is the use of recursion.
Indeed, the $O(n^{8/3})$ encoding can be formulated as a carefully crafted biclique covering, making for another example of the difference between BVA-style encodings, which allow for covering auxiliary variables, and encodings that only cover the base variables. Running BVA on top of the $O(n^{8/3})$ encoding led to smaller encodings than either of them alone, reinforcing the idea that BVA can operate recursively on top of an initial covering (see Table 1 in Appendix C). Experimental results for the SLP problem of Bannai et al. (2022) are presented in Table 2 (Appendix C). The recursion over blocks in Theorem 22 has a nice interpretation for scheduling: if one were to schedule events on, e.g., a calendar year, each starting on some day $d_1$ and ending on some day $d_2$, it would be convenient to first catalogue the events based on their starting and ending months: anything starting in January and ending in May is incompatible with anything starting in March and ending in September. Then, for more granularity, the same technique decomposes months into weeks, and so on. The concrete decomposition in Theorem 22 used $\lg n$ blocks in the first recursive step, which would be $\lg 365 \approx 8.5$ blocks for a calendar year, on the order of magnitude of the number of months. On the other hand, the decomposition for the $O(n^{8/3})$ version, where only one level of recursion is used, takes $k = n^{2/3}$, which would be roughly $365^{2/3} \approx 51$ for a calendar year, so basically the number of weeks. Studying the practical applicability of our encoding for scheduling problems is an interesting direction for future work. In general, it is not necessarily true that fewer clauses will lead to reduced solving time (Björk, 2011; Haberlandt et al., 2023).
For example, together with Marijn Heule, we presented an $O(n^2k\lg k)$ encoding for the packing chromatic number of $\mathbb{Z}^2$ (Subercaseaux and Heule, 2022), which ended up being surpassed by a much more effective although asymptotically larger encoding (Subercaseaux and Heule, 2023). Finally, we note that our ideas can be readily applied to Integer Linear Programming formulations, and hopefully to other forms of constraint programming too.
# References
Noga Alon. Covering graphs by the minimum number of equivalence relations. Combinatorica, 6(3):201–206, September 1986. ISSN 1439-6912. doi: 10.1007/BF02579381. [Cited on Section 5]
Roberto Asín, Robert Nieuwenhuis, Albert Oliveras, and Enric Rodríguez-Carbonell. Cardinality Networks: A theoretical and empirical study. Constraints, 16(2):195–221, April 2011. ISSN 1572-9354. doi: 10.1007/s10601-010-9105-0. [Cited on Section 1]
Hideo Bannai, Keisuke Goto, Masakazu Ishihata, Shunsuke Kanda, Dominik Köppl, and Takaaki Nishimoto. Computing NP-Hard Repetitiveness Measures via MAX-SAT. In Shiri Chechik, Gonzalo Navarro, Eva Rotenberg, and Grzegorz Herman, editors, 30th Annual European Symposium on Algorithms (ESA 2022), volume 244 of Leibniz International Proceedings in Informatics (LIPIcs), pages 12:1–12:16, Dagstuhl, Germany, 2022. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. ISBN 978-3-95977-247-1. doi: 10.4230/LIPIcs.ESA.2022.12. URL https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2022.12. [Cited on Sections (document), 5, and C]
Hideo Bannai, Keisuke Goto, Masakazu Ishihata, Shunsuke Kanda, Dominik Köppl, Takaaki Nishimoto, and Bernardo Subercaseaux. Computing NP-Hard Repetitiveness Measures via MAX-SAT (under review), 2025. [Cited on Section 5]
Armin Biere. Exactly one clauses · Issue #39 · arminbiere/kissat — github.com. https://github.com/arminbiere/kissat/issues/39#issuecomment-1686043817, 2023. [Accessed 06-06-2025].
[Cited on Section 3.1]
Magnus Björk. Successful SAT encoding techniques. J. Satisf. Boolean Model. Comput., 7(4):189–201, 2011. doi: 10.3233/SAT190085. URL https://doi.org/10.3233/sat190085. [Cited on Sections 1 and 5]
Jingchao Chen. A new SAT encoding of the at-most-one constraint. Proc. of the Tenth Int. Workshop of Constraint Modelling and Reformulation, page 8, 2010. [Cited on Section 2.1]
F. R. K. Chung, P. Erdős, and J. Spencer. On the decomposition of graphs into complete bipartite subgraphs. In Paul Erdős, László Alpár, Gábor Halász, and András Sárközy, editors, Studies in Pure Mathematics: To the Memory of Paul Turán, pages 95–101. Birkhäuser, Basel, 1983. ISBN 978-3-0348-5438-2. doi: 10.1007/978-3-0348-5438-2_10. [Cited on Sections (document), 1, and 9]
Alessio Conte, Roberto Grossi, and Andrea Marino. Clique covering of large real-world networks. In Proceedings of the 31st Annual ACM Symposium on Applied Computing, SAC '16, pages 1134–1139, New York, NY, USA, 2016. Association for Computing Machinery. ISBN 9781450337397. doi: 10.1145/2851613.2851816. URL https://doi.org/10.1145/2851613.2851816. [Cited on Section 5]
Jinquan Dong and Yanpei Liu. On the Decomposition of Graphs into Complete Bipartite Graphs. Graphs and Combinatorics, 23(3):255–262, June 2007. ISSN 1435-5914. doi: 10.1007/s00373-007-0722-3. [Cited on Section 3]
Gregory Emdin, Alexander S. Kulikov, Ivan Mihajlin, and Nikita Slezkin. CNF Encodings of Parity. In Stefan Szeider, Robert Ganian, and Alexandra Silva, editors, 47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022), volume 241 of Leibniz International Proceedings in Informatics (LIPIcs), pages 47:1–47:12, Dagstuhl, Germany, 2022. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. ISBN 978-3-95977-256-3. doi: 10.4230/LIPIcs.MFCS.2022.47. [Cited on Section 1]
Peter C. Fishburn and Peter L. Hammer. Bipartite dimensions and bipartite degrees of graphs.
Discrete Mathematics, 160(1):127–148, November 1996. ISSN 0012-365X. doi: 10.1016/0012-365X(95)00154-O. [Cited on Section 3]
Alan Frieze and Bruce Reed. Covering the edges of a random graph by cliques. Combinatorica, 15(4):489–497, December 1995. ISSN 1439-6912. doi: 10.1007/BF01192522. [Cited on Section 2.1]
Alan Frieze and Bruce Reed. Covering the edges of a random graph by cliques, 2011. URL https://arxiv.org/abs/1103.4870. [Cited on Sections 2.1 and 2.1]
Andrew Haberlandt, Harrison Green, and Marijn J. H. Heule. Effective Auxiliary Variables via Structured Reencoding. In Meena Mahajan and Friedrich Slivovsky, editors, 26th International Conference on Theory and Applications of Satisfiability Testing (SAT 2023), volume 271 of Leibniz International Proceedings in Informatics (LIPIcs), pages 11:1–11:19, Dagstuhl, Germany, 2023. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. ISBN 978-3-95977-286-0. doi: 10.4230/LIPIcs.SAT.2023.11. [Cited on Sections 1, 3.1, and 5]
Marijn J. H. Heule and Manfred Scheucher. Happy Ending: An Empty Hexagon in Every Set of 30 Points. In Bernd Finkbeiner and Laura Kovács, editors, Tools and Algorithms for the Construction and Analysis of Systems, pages 61–80, Cham, 2024. Springer Nature Switzerland. ISBN 978-3-031-57246-3. doi: 10.1007/978-3-031-57246-3_5. [Cited on Section 1]
Marijn J. H. Heule and Stefan Szeider. A SAT Approach to Clique-Width. ACM Trans. Comput. Logic, 16(3):24:1–24:27, June 2015. ISSN 1529-3785. doi: 10.1145/2736696. [Cited on Section 1]
Daniel Donnelly (https://cs.stackexchange.com/users/12373/daniel-donnelly). Is the smallest grammar problem over the singleton alphabet known to be NP-complete or ...? Computer Science Stack Exchange, 2025. URL https://cs.stackexchange.com/q/171713 (version: 2025-04-12). [Cited on Section C]
Alexey Ignatiev, Antonio Morgado, and Joao Marques-Silva. Cardinality encodings for graph optimization problems.
In Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI'17, pages 652–658, Melbourne, Australia, August 2017. AAAI Press. ISBN 978-0-9992411-0-3. [Cited on Sections 1, 2.1, 2.1, 5, and 5]
Stasys Jukna. Computational Complexity of Graphs. In Advances in Network Complexity, chapter 5, pages 99–153. John Wiley & Sons, Ltd, 2013. ISBN 978-3-527-67046-8. doi: 10.1002/9783527670468.ch05. [Cited on Sections 3.2 and 5]
Petr Kučera, Petr Savický, and Vojtěch Vorel. A lower bound on CNF encodings of the at-most-one constraint. Theoretical Computer Science, 762:51–73, 2019. ISSN 0304-3975. doi: 10.1016/j.tcs.2018.09.003. [Cited on Sections 1 and 2.1]
Norbert Manthey, Marijn J. H. Heule, and Armin Biere. Automated reencoding of boolean formulas. In Haifa Verification Conference, pages 102–117. Springer, 2012. [Cited on Sections (document), 1, 3.1, and 3.1]
Filip Marić. Timetabling based on SAT encoding: a case study. https://poincare.matf.bg.ac.rs/~filip/phd/sattimetable.pdf, 2008. [Cited on Section 4.3]
Jaishree Mayank and Arijit Mondal. Efficient SAT encoding scheme for schedulability analysis of non-preemptive tasks on multiple computational resources. Journal of Systems Architecture, 110:101818, November 2020. ISSN 1383-7621. doi: 10.1016/j.sysarc.2020.101818. [Cited on Sections (document), 4.3, and 4.3]
Dhruv Mubayi and György Turán. Finding bipartite subgraphs efficiently. Information Processing Letters, 110(5):174–177, February 2010. ISSN 0020-0190. doi: 10.1016/j.ipl.2009.11.015. [Cited on Sections 2.2 and 5]
Van-Hau Nguyen, Van-Quyet Nguyen, Kyungbaek Kim, and Pedro Barahona. Empirical study on SAT-encodings of the at-most-one constraint. In The 9th International Conference on Smart Media and Applications, SMA 2020, pages 470–475, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450389259. doi: 10.1145/3426020.3426170. URL https://doi.org/10.1145/3426020.3426170.
[Cited on Section 2.1] Sampo Niskanen and Patric Östergård. Cliquer homepage. https://users.aalto.fi/~pat/cliquer.html, 2002. [Accessed 09-06-2025]. [Cited on Section 5] Steven Prestwich. CNF Encodings, chapter 2. IOS Press, February 2021. doi: 10.3233/faia200985. URL http://dx.doi.org/10.3233/FAIA200985. [Cited on Section 1] Long Qian, Eric Wang, Bernardo Subercaseaux, and Marijn J. H. Heule. Unfolding boxes with local constraints, 2025. URL https://arxiv.org/abs/2506.01079. [Cited on Section 1] Jussi Rintanen. Compact representation of sets of binary constraints. Frontiers in Artificial Intelligence and Applications, pages 143–147, Netherlands, 2006. IOS Press BV. ISBN 9781586036423. [Cited on Sections 1 and 5] Andre Schidler and Stefan Szeider. SAT-based Decision Tree Learning for Large Data Sets. Journal of Artificial Intelligence Research, 80:875–918, July 2024. ISSN 1076-9757. doi: 10.1613/jair.1.15956. [Cited on Section 1] Carsten Sinz. Towards an Optimal CNF Encoding of Boolean Cardinality Constraints. In Peter van Beek, editor, Principles and Practice of Constraint Programming - CP 2005, pages 827–831, Berlin, Heidelberg, 2005. Springer Berlin Heidelberg. ISBN 978-3-540-32050-0. [Cited on Section 1] Bernardo Subercaseaux and Marijn J. H. Heule. The Packing Chromatic Number of the Infinite Square Grid is 15. In Sriram Sankaranarayanan and Natasha Sharygina, editors, Tools and Algorithms for the Construction and Analysis of Systems - 29th International Conference, TACAS 2023, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2023, Paris, France, April 22-27, 2023, Proceedings, Part I, volume 13993 of Lecture Notes in Computer Science, pages 389–406. Springer, 2023. doi: 10.1007/978-3-031-30823-9_20. [Cited on Sections 1 and 5] Bernardo Subercaseaux and Marijn J. H. Heule. The Packing Chromatic Number of the Infinite Square Grid Is at Least 14.
In 25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. [Cited on Section 5] William J. Wesley. Lower Bounds for Book Ramsey Numbers, October 2024. [Cited on Section 1] Qinghua Wu and Jin-Kao Hao. A review on algorithms for maximum clique problems. European Journal of Operational Research, 242(3):693–709, May 2015. ISSN 0377-2217. doi: 10.1016/j.ejor.2014.09.064. [Cited on Section 5] Neng-Fa Zhou. Yet another comparison of SAT encodings for the at-most-k constraint, 2020. URL https://arxiv.org/abs/2005.06274. [Cited on Section 2.1]

# A Proof of Proposition 20

Proposition 20. Let $\tau : \mathrm{var}(\mathsf{NIP}_n) \to \{\bot, \top\}$ be an assignment. Then, we have that
$$ \tau \models \mathsf{NIP}_n \iff \mathsf{SAT}(\mathsf{IPT}_n|_{\tau}), $$
and moreover, any satisfying assignment $\theta$ for $\mathsf{IPT}_n|_{\tau}$ must assign $\theta(z_{a,b}) = \top$ if and only if there is some $[i,j]$ such that $\tau(x_{i,j}) = \top$ and $[a,b] \subseteq [i,j]$.

Proof. For the $(\implies)$ direction, we assume that $\tau \models \mathsf{NIP}_n$, and then build the assignment $\theta : \mathrm{var}(\mathsf{IPT}_n|_{\tau}) \to \{\bot, \top\}$ as described in the statement of the proposition: $\theta(z_{a,b}) = \top$ if and only if there is some $[i,j]$ such that $\tau(x_{i,j}) = \top$ and $[a,b] \subseteq [i,j]$. Then the clauses (1–4) can be easily checked to be satisfied by $\theta$. For the $(\Longleftarrow)$ direction, we assume a satisfying assignment $\theta$ for $\mathsf{IPT}_n|_{\tau}$ and, expecting a contradiction, assume that $\tau \not\models \mathsf{NIP}_n$.
Since $\mathsf{IPT}_n|_{\tau}$ is satisfiable, and $\mathsf{IPT}_n$ contained the clauses $\overline{t_{\ell}} \vee \bigvee_{[i,j] \supseteq \{\ell\}} x_{i,j}$ whose variables were assigned by $\tau$, the only possibility is that $\tau \not\models (\overline{x_{i,j}} \vee t_{\ell})$ for some $1 \leqslant i < j \leqslant n$ and $\ell \in [i,j]$. Thus, we have that $\tau(x_{i,j}) = \top$ and $\tau(t_{\ell}) = \bot$. But by the clauses of type (1), $\tau(x_{i,j}) = \top$ implies $\theta(z_{i,j}) = \top$, so it suffices to prove that $\theta(z_{i,j}) = \top$ contradicts $\tau(t_{\ell}) = \bot$. We do this by showing that $\theta(z_{i,j}) = \top$ implies $\tau(t_{\ell}) = \top$, for any $\ell \in [i,j]$. The proof is by induction over $d := j - i$. If $d = 1$, then the clauses of type (2) directly give us $\tau(t_{\ell}) = \top$, and if $d > 1$, the clauses of type (3) give us that $\theta(z_{i+1,j}) = \top$ and $\theta(z_{i,j-1}) = \top$, and as $\ell$ belongs to either $[i+1,j]$ or $[i,j-1]$, we conclude by the inductive hypothesis. Let us now show that no other $\theta$ works. Indeed, we first prove that $\tau(x_{i,j}) = \top$ implies that $\theta(z_{a,b}) = \top$ for every $[a,b] \subseteq [i,j]$, by induction on $d := (a-i) + (j-b)$. If $d = 0$, then $a = i$ and $b = j$, so by the clause of type (1) we are done. Otherwise, we need to prove that the statement for $d$ implies the case for $d+1$, for any $d < j - i - 1$ (since $j - i - 1$ is the maximum possible value of $d$).
We assume $\tau(x_{i,j}) = \top$ and note that by the inductive hypothesis this implies $\theta(z_{a,b}) = \top$ for any $1 \leqslant a < b \leqslant n$ such that $(a-i) + (j-b) = d$. Then, any interval $[a',b'] \subseteq [i,j]$ with $(a'-i) + (j-b') = d+1$ must be of the form $[a+1,b]$ or $[a,b-1]$ with $(a-i) + (j-b) = d$, and as by the clauses of type (3) we have $\theta(z_{a+1,b}) = \top$ and $\theta(z_{a,b-1}) = \top$, we are done. On the other hand, assume that $\theta(z_{a,b}) = \top$ for some pair $1 \leqslant a < b \leqslant n$, and let us show that $\tau(x_{i,j}) = \top$ for some $[i,j] \supseteq [a,b]$. This time the induction is over $d := n - (b-a)$, with the base case $d = 1$ implying that $a = 1, b = n$, from where (4) directly yields $z_{1,n} \to x_{1,n}$, which proves the base case. For the inductive case, note that a clause of type (4) guarantees that either $\tau(x_{a,b}) = \top$, in which case we are done, or that either $\theta(z_{a-1,b}) = \top$ or $\theta(z_{a,b+1}) = \top$ holds. But $n - (b - (a-1)) = n - ((b+1) - a) = d - 1$, and thus by the inductive hypothesis we have that $\tau(x_{i,j}) = \top$ for some $[i,j]$ such that $[i,j] \supseteq [a-1,b] \supset [a,b]$ or $[i,j] \supseteq [a,b+1] \supset [a,b]$. □

# B Proof of Lemma 21

Lemma 21. Each edge $e := \{[i_1,j_1], [i_2,j_2]\} \in E(\mathcal{I}_n)$ with $i_1 \leqslant i_2$ must be part of exactly one of the following cases:

1. $B(i_1) = B(i_2)$ and $B(j_1) = B(j_2)$, and $i_2 \leqslant j_1$, in which case we say $e$ is an $x$-edge.

2.
$B(i_1) < B(i_2) < B(j_1)$, in which case we say $e$ is a $y$-edge.

3. $B(i_1) = B(i_2)$ and $i_2 \leqslant j_1$, but $B(j_1) \neq B(j_2)$, in which case we say $e$ is an $s$-edge.

4. $B(i_1) < B(i_2) = B(j_1) = B(j_2)$ and $i_2 \leqslant j_1$, in which case we say $e$ is an $f$-edge.

5. $B(i_1) < B(i_2) = B(j_1) \neq B(j_2)$, and $i_2 \leqslant j_1$, in which case we say $e$ is an $m$-edge.

Moreover, any tuple $(i_1, j_1, i_2, j_2)$ with $i_1 \leqslant i_2$ that satisfies one of these cases implies $\{[i_1,j_1], [i_2,j_2]\} \in E(\mathcal{I}_n)$.

Proof. First, observe that in all cases we are assuming $i_1 \leqslant i_2$, and thus, as all cases except (2) explicitly require $i_2 \leqslant j_1$, they all imply that the intersection of the intervals $[i_1,j_1]$ and $[i_2,j_2]$ is non-empty. For case (2), note that $B(i_2) < B(j_1)$ implies $i_2 < j_1$, which together with $i_1 \leqslant i_2$ implies again that the intersection of the intervals $[i_1,j_1]$ and $[i_2,j_2]$ is non-empty. Thus, all cases correspond to actual edges in the graph $\mathcal{I}_n$. To see exhaustiveness, we present in Figure 4 a decision tree that performs a case analysis on the relationships between the blocks. □ Furthermore, as a sanity check, we present in Code 1 a Python script that validates the case analysis for some finite values.
Code 1: Python script validating the case analysis of Lemma 21.

```python
def B(i, b):
    # ceil(i/b) can be written as (i + b - 1) // b
    return (i + b - 1) // b

def classify(i1, j1, i2, j2, b):
    """Return which case (1,...,5) the edge {[i1,j1],[i2,j2]} belongs to."""
    bi1, bi2 = B(i1, b), B(i2, b)
    bj1, bj2 = B(j1, b), B(j2, b)

    # the five predicates from the lemma
    is_x = (bi1 == bi2) and (bj1 == bj2)
    is_y = (bi1 < bi2) and (bi2 < bj1)
    is_s = (bi1 == bi2) and (bj1 != bj2)
    is_f = (bi1 < bi2) and (bj1 == bj2) and (bi2 == bj1)
    is_m = (bi1 < bi2) and (bi2 == bj1) and (bj1 != bj2)

    flags = [is_x, is_y, is_s, is_f, is_m]
    if sum(flags) != 1:
        # either none or more than one case matched
        return flags
    return flags.index(True) + 1  # return 1, 2, 3, 4, or 5.

def check_lemma(n, b):
    bad = []
    for i1 in range(1, n + 1):
        for i2 in range(i1, n + 1):
            for j1 in range(i1 + 1, n + 1):
                for j2 in range(i2 + 1, n + 1):
                    # only look at intersecting intervals
                    if i2 <= j1:
                        cls = classify(i1, j1, i2, j2, b)
                        if cls is None or type(cls) is not int:
                            bad.append((i1, j1, i2, j2))
    if bad:
        print("Found unclassified or multiply-classified edges:")
        for quad in bad:
            print(quad, "-> (blocks)", [B(q, b) for q in quad])
    else:
        print(f"All edges classified correctly! (n={n}, b={b})")

if __name__ == "__main__":
    # Example for n = 50, block size of 10.
    check_lemma(n=50, b=10)

    # Example without exact divisibility
    check_lemma(n=50, b=8)
```

Figure 4: A decision tree for the cases of Lemma 21.

# C Application to String Compression with SLPs

Note that in the problem of Bannai et al. (2022), the intervals can overlap if one is strictly contained in the other. This can be achieved by minor modifications in the indices of our constraints, without any new conceptual ideas. We display some preliminary experimental results in Table 2, where we show the usage of the $O(n^{8/3})$ encoding for disjoint intervals as a replacement for constraint 8 of the SLP encoding of Bannai et al. (2022). Our experiments are over families of strings that are standard examples for string compression, and their descriptions can be found in the paper of Bannai et al.
(2022), and the concrete strings are publicly available in their repository: https://github.com/kg86/satcomp. Perhaps the most striking example is that of Fibonacci binary strings, defined recursively as $F_0 = 0$, $F_1 = 1$, and $F_n = F_{n-1} F_{n-2}$ (concatenation) for $n \geqslant 2$, where, as shown in Table 2, the improved encoding reduces the number of clauses from 118 million to 12 million (fib12.txt). Generally speaking, the impact of the encoding for the disjoint (or strictly contained) intervals is not always as large as the asymptotics suggest, since the constraints in the encoding from (Bannai et al., 2022, Equation (8)) are only applied based on a condition that depends on the repetitiveness of the input string. The impact of the encoding is thus maximal for strings of the form $a^n$, for some symbol $a$. These strings are interesting from a theoretical point of view, since the complexity of the smallest SLP (which is NP-hard to compute over arbitrary strings) is not known (Donnelly, 2025).

Table 1: Comparison of encoding methods for the independent-set property of $\mathcal{I}_n$. We use the acronym 'bd' for the $O(n^{8/3})$ encoding, standing for "block decomposition".

Table 2: Comparison of encodings for smallest SLPs
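The Fibonacci binary strings used in these benchmarks are easy to regenerate from the recurrence given above; the following is a minimal Python sketch (the file naming, e.g. fib12.txt, belongs to the cited repository and is assumed to follow the same indexing):

```python
def fib_string(n):
    """F_0 = "0", F_1 = "1", F_n = F_{n-1} + F_{n-2} (concatenation)."""
    a, b = "0", "1"          # (F_0, F_1)
    for _ in range(n):
        a, b = b, b + a      # shift the pair: (F_k, F_{k+1}) -> (F_{k+1}, F_{k+2})
    return a

assert fib_string(2) == "10"
assert fib_string(3) == "101"
# lengths follow the Fibonacci numbers: |F_n| = fib(n + 1)
print([len(fib_string(n)) for n in range(8)])  # [1, 1, 2, 3, 5, 8, 13, 21]
```

Since each $F_n$ is the concatenation of the two previous strings, an SLP with $O(n)$ rules generates a string of length $\mathrm{fib}(n+1)$, which is why this family is a standard stress test for grammar-based compression.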
We show how several graph problems (e.g., vertex-cover, independent-set, $k$-coloring) can be encoded into CNF using only $O(|V|^2 / \lg |V|)$ many clauses, as opposed to the $\Omega(|V|^2)$ constraints used by standard encodings. This somewhat surprising result is a simple consequence of a result of Erd\H{o}s, Chung, and Spencer (1983) about biclique coverings of graphs, and opens theoretical avenues to understand the success of "Bounded Variable Addition" (Manthey, Heule, and Biere, 2012) as a preprocessing tool. Finally, we show a novel encoding for independent sets in some dense interval graphs using only $O(|V| \lg |V|)$ clauses (the direct encoding uses $\Omega(|V|^2)$), which we have successfully applied to a string-compression encoding posed by Bannai et al. (2022). As a direct byproduct, we obtain a reduction in the encoding size of a scheduling problem posed by Mayank and Mondal (2020) from $O(NMT^2)$ to $O(NMT + M T^2 \lg T)$, where $N$ is the number of tasks, $T$ the total timespan, and $M$ the number of machines.
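To make the clause-count theme concrete with a self-contained toy example, the sketch below contrasts the direct pairwise at-most-one encoding with a Sinz-style sequential encoding using auxiliary variables. These are the standard encodings from the cited literature, not the block-decomposition or biclique-covering encodings of the paper itself; the quadratic-versus-linear gap they exhibit is the same phenomenon in miniature:

```python
from itertools import combinations, product

def pairwise_amo(n):
    """Direct at-most-one over x_1..x_n: one binary clause per pair, n(n-1)/2 total."""
    return [[-i, -j] for i, j in combinations(range(1, n + 1), 2)]

def sequential_amo(n):
    """Sinz-style sequential at-most-one with auxiliary vars s_1..s_{n-1}
    (numbered n+1 .. 2n-1).  Produces 3n - 4 clauses instead of n(n-1)/2."""
    s = lambda i: n + i  # CNF variable number of s_i
    clauses = [[-1, s(1)]]
    for i in range(2, n):
        clauses += [[-i, s(i)], [-s(i - 1), s(i)], [-i, -s(i - 1)]]
    clauses.append([-n, -s(n - 1)])
    return clauses

def satisfiable(clauses, fixed, n_vars):
    """Brute-force: can the free (auxiliary) variables extend `fixed` to a model?"""
    for tail in product([False, True], repeat=n_vars - len(fixed)):
        assign = list(fixed) + list(tail)
        val = lambda lit: assign[abs(lit) - 1] == (lit > 0)
        if all(any(val(l) for l in c) for c in clauses):
            return True
    return False

n = 4
print(len(pairwise_amo(n)), len(sequential_amo(n)))  # 6 vs 8 for n = 4
# both encodings accept exactly the assignments with at most one true variable
for xs in product([False, True], repeat=n):
    expect = sum(xs) <= 1
    assert satisfiable(pairwise_amo(n), xs, n) == expect
    assert satisfiable(sequential_amo(n), xs, 2 * n - 1) == expect
print("encodings agree")
```

For tiny $n$ the auxiliary-variable encoding is not smaller, but its clause count grows as $3n - 4$ rather than $\Theta(n^2)$, the same trade-off that motivates auxiliary-variable techniques such as Bounded Variable Addition.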
# 1. Introduction

Most NoSQL systems follow a "schema-on-read" approach, allowing data to be stored without a predefined schema. While this schemaless nature grants developers the flexibility to handle frequent changes in data structures — common in modern applications — it also introduces a key challenge: building database utilities such as schema visualization, code generation, and query optimization typically requires knowledge of the underlying data structure. As highlighted in [2], NoSQL database tools should incorporate reverse engineering strategies to extract the implicit schema from code or data. NoSQL systems are commonly classified into four categories based on their underlying data model: columnar, document, key–value, and graph, as noted by [23]. In the first three models, aggregation tends to prevail over references between data, as discussed in [11, 23]. Specifically, key–value systems store data in associative arrays, document-oriented systems use JSON-like documents, and columnar systems organize data in tables where a column may contain aggregates of other columns. In contrast, graph databases are designed to store highly connected data, where the primary focus is on relationships between entities, and the aggregation of internal data is less relevant or sometimes unnecessary. Both static code analysis of database applications and stored data analysis are traditional techniques to extract information from databases. In the context of NoSQL schema extraction, several data analysis strategies have been proposed. Most of these approaches target a single data model. For instance, document stores such as MongoDB are addressed in [19, 24, 27], graph databases like Neo4J are considered in [8], and columnar databases such as HBase are covered in [13]. More recently, Carlos J. Fernández-Candel et al. presented a strategy based on the U-Schema unified metamodel, which can represent schemas for both relational and NoSQL databases [11].
This metamodel differs from the schema representations used in previous proposals in three main aspects: (i) it supports both aggregation and reference relationships between entity types; (ii) it explicitly models relationship types, as required in graph schemas or many-to-many tables in relational schemas; and (iii) it allows entity and relationship types to have multiple structural variations, which is essential in schemaless environments where a single fixed structure per type is not enforced. As far as we know, no approaches applying code analysis to infer complete NoSQL schemas have been published. However, static code analysis of MongoDB applications has been used for related purposes, such as discovering entity evolution and detecting database access operations [20, 5]. [20] presented a strategy aimed at tracking how the schema of a specific entity type evolves throughout an application's lifecycle. Their approach analyzes different versions of Java applications to identify structural changes over time, i.e., entity versions. However, it does not extract a complete database schema, as relationships between entities are not identified. Boris Cherry et al. proposed a method to detect MongoDB access operations in JavaScript applications. Their work focuses on locating access points — such as queries, insertions, updates, and deletions — across diverse codebases. While data reverse engineering was used in [11] to infer NoSQL logical schemas as U-Schema models, in this work we present a static code analysis strategy with the same objective. A relevant distinction should be made between both approaches. Data-driven strategies are well suited to detecting structural variability in schema types, whereas identifying such variability through code analysis is considerably more complex. Therefore, our strategy is complementary to that presented in [11], as both approaches extract NoSQL logical schemas as U-Schema models.
In our case, schema type versions can be detected in a manner similar to the approach of [20], by analyzing multiple versions of the application over time. Our reverse engineering solution is implemented as a three-step model transformation chain. First, the source code is injected into a model that captures the structure of programming language statements. From this code model, a control flow model is derived, preserving references to the original code elements. This control flow model is then traversed to generate a model that captures CRUD operations and the physical structure of the data, referred to as the Database Operations and Structure (DOS) model. Finally, the U-Schema logical schema is obtained from the physical schema embedded within the DOS model. Beyond schema extraction, we also explored the use of the DOS model to support the automation of database refactorings in NoSQL environments. In particular, we focused on identifying join queries to provide database practitioners with actionable information for deciding whether a specific join can be eliminated by duplicating fields from the referenced entity into the referencing one. This operation corresponds to a well-known denormalization technique used in relational databases to optimize query performance — for instance, by storing detail records together with master data. A similar strategy is considered a best practice in document databases such as MongoDB, especially when applications perform queries involving references (i.e., joins) between two document collections. Once a candidate join is selected for removal, the corresponding schema change is automatically applied: the schema, data, and source code are updated accordingly. Each step of the reverse engineering process was tested by verifying the correctness of the generated models.
To this end, several validation strategies were applied, including: (i) rewriting the source code represented in the models; (ii) generating textual representations to facilitate the identification of database-related statements; and (iii) visualizing the control flow and the extracted database schema using tables and graphs. The overall process was validated through a round-trip experiment, in which JavaScript code accessing a MongoDB store was analyzed. This code is part of an automatically generated Node.js application. The extracted logical schema was compared both with the originally designed schema and with the schema inferred directly from the stored data. Our work contributes to the state of the art in the following ways: • To the best of our knowledge, this is the first code analysis approach capable of extracting NoSQL logical schemas, where entity types are connected through both aggregation and reference relationships. Additionally, the approach supports relationship types. By representing schemas using a generic metamodel, our method becomes applicable to multi-model database tools, enabling broader use in diverse database environments. • The extracted information in the devised reverse engineering process also enables the automation of database refactorings. In this paper, we illustrate this capability with a strategy to duplicate fields in order to eliminate the need for expensive joins, thus improving query performance. • The proposed reverse engineering process leverages metamodel-based abstractions [3] to represent the involved information: source code, control flow, database operations, and data structures. The metamodels were designed to ensure platform independence and reusability. In particular, the code metamodel captures common elements of object-oriented languages and can be extended with language-specific constructs. • A controlled round-trip validation strategy was conducted.
A large language model (LLM) was used to generate a Node.js application from a database schema. This validation allowed for precise comparison between the extracted schema and the original design, and enabled the analysis of join patterns and the verification of refactoring outcomes in a realistic yet reproducible setting. The full implementation of our approach, which includes all metamodels, model-to-model transformations, and code analysis algorithms used for generating intermediate models, extracting schemas, and automating refactorings, is publicly available on GitHub. This paper is structured as follows. Section 2 provides an overview of the proposed approach, which is described in detail across Sections 3 to 6. Section 3 presents the internal representation of source code using the Code and Control Flow metamodels. Section 4 describes the construction of the DOS model from these representations. Section 5 explains how U-Schema models are derived from DOS models, while Section 6 focuses on the identification of join queries and the application of field duplication refactorings. All metamodels and algorithms are described in detail, including the testing strategies applied to each of them. Section 7 presents the overall validation of the complete approach. Finally, Section 8 reviews related work, and Section 9 concludes the paper and outlines future research directions.

# 2. Overview of the approach

This section outlines the approach presented in this paper. A running example involving an operation on a document store is used to illustrate the kind of information that can be discovered through code analysis to extract the logical schema and automate database refactoring. In document stores, semi-structured objects are stored in collections, and each object has a JSON-like structure consisting of a set of name–value pairs. The value of a field can be a primitive type, an array of objects, or an embedded object.
For example, Figure 1 shows sample user and movie objects in a document store used by a streaming service to manage subscribed users, including personal information and a list of watched movies. User objects contain a watchedMovies field, which holds an array of embedded objects. Each embedded object includes three fields: id, which records the identifier or key; movie_id, which stores the identifier of the watched movie (i.e., a reference); and stars, which contains the user's rating of that movie.

```json
// User Collection
{
  "name": "Brian",
  "surname": "Caldwell",
  "email": "brian_caldwell@gmail.com",
  "watchedMovies": [
    { "stars": 7, "movie_id": 202 },
    { "stars": 10, "movie_id": 303 }
  ]
}

// Movie Collection
{
  "_id": 202,
  "title": "The Matrix",
  "director": "The Wachowskis"
}
{
  "_id": 303,
  "title": "The Godfather",
  "director": "Francis Ford Coppola"
}
```

Listing 1 presents a pseudo-code example of an operation referred to as "First-Watched-Movie" (FWM), which consists of three statements. First, a query selects a user by name. Then, a join query retrieves the first movie watched by that user. Finally, if the user rated that movie with a number of stars greater than or equal to 5, the following information is printed to the console: the user's full name, as well as the title and star rating of the retrieved movie.

Listing 1: Pseudo-code of the FWM database operation.

To facilitate code analysis, a more abstract representation is derived from the AST produced by a parser. We define a set of interconnected models: one that captures the control flow of the program, and another that represents its abstract syntax. Nodes and edges in the Control Flow model are linked to elements in the Code model, enabling the analysis of execution paths while preserving structural context. The process begins by injecting the source code into a Code model, from which the Control Flow model is constructed, as described in Section 3.
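As a toy illustration of the kind of static analysis described here, the following Python sketch walks an AST and records dot-notation field accesses, flagging those reached through index notation. It is only a stand-in for the JavaScript tooling of the paper: the analyzed snippet and all identifiers in it are invented, and Python's standard `ast` module plays the role of the parser-produced AST:

```python
import ast

# A Python rendering of FWM-style access patterns (invented snippet).
SNIPPET = """
user = users.query(name == "Brian")
movie = movies.query(m_id == user.watchedMovies[0].movie_id)
print(user.name, movie.title)
"""

def field_accesses(source):
    """Collect (root variable, field, reached-via-index?) triples from
    dot-notation expressions: the raw material for a physical schema."""
    tree = ast.parse(source)
    # attributes used as call targets (e.g. users.query) are methods, not fields
    call_funcs = {id(n.func) for n in ast.walk(tree) if isinstance(n, ast.Call)}
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Attribute) and id(node) not in call_funcs:
            base, via_index = node.value, False
            if isinstance(base, ast.Subscript):  # e.g. user.watchedMovies[0].x
                base, via_index = base.value, True
            while isinstance(base, ast.Attribute):  # walk to the chain's root
                base = base.value
            if isinstance(base, ast.Name):
                found.add((base.id, node.attr, via_index))
    return sorted(found)

for var, field, via_index in field_accesses(SNIPPET):
    print(f"{var}.{field}" + (" [array element]" if via_index else ""))
```

On this snippet the analysis recovers `user.name`, `user.watchedMovies`, `movie.title`, and `user.movie_id` tagged as an array element, which mirrors how index-notation accesses reveal array-typed properties in the analysis described above.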
Extracting the logical schema and automating database refactorings requires identifying both the physical structure of the data and the CRUD operations performed on it. This information is captured in the DOS model, which is obtained by traversing the Control Flow model, as explained in Section 4. Specifically, the traversal must visit relevant statements to identify the following elements: • Data containers (e.g., collections in document stores or tables in relational or columnar databases). They can be identified through the analysis of CRUD operations, e.g., by inspecting the argument of db() calls in the queries of the FWM example. • Structure of objects stored in a particular container, defined as a set of properties represented by name–type pairs. Properties are inferred from object fields, which can be detected in expressions that use dot notation to access a variable's attributes, such as user.name or movie.title. • Variables holding database objects. These are discovered by analyzing expressions such as method invocations, assignments, and arguments. For example, the variables user and movie are detected in the assignments found in statements 1 and 2 of the FWM example. • Types of properties. Properties can be of primitive type, collection (e.g., array or list), aggregate, or reference. Primitive types are inferred from expressions such as assignments or conditions (e.g., name == "Brian"). An array type is detected when elements are accessed using index notation. For example, analyzing the expression user.watchedMovies[0].movie_id reveals that the watchedMovies property is an array. Furthermore, it indicates that array elements are objects containing the movie_id property. In such cases, the type of watchedMovies is inferred as an array of an aggregate type, typically named after the field in singular form.
We use the term non-root entities to refer to these aggregate types, distinguishing them from root entities corresponding to data containers (e.g., user and movie). • CRUD operations. These are detected by identifying calls to database API functions. In the FWM example, assuming the query() method issues a read operation on the store specified by the db() call, two read operations would be identified in statements 1 and 2. • Reference and join queries. A query is identified as a join when its condition includes an equality check between the identifier field of one object and a field of a previously retrieved object. In such cases, the latter field is considered a reference, and the corresponding property is assigned a reference type. In the FWM example, the query on movies (statement 2) qualifies as a join query, and the type of the movie_id property is inferred as "Reference to Movie." Figure 2 illustrates the information contained in the DOS model for the pseudo-code of the FWM example. The physical data structure is shown at the top of the figure, while the queries appear at the bottom. Note that the types of the fields surname and email in User, and title and director in Movie, cannot be determined directly from the pseudo-code. Since these fields only occur in print statements, the default type String is assigned to them. In our approach, the physical data schema is then transformed into a logical schema represented using the U-Schema generic metamodel, as described in Section 5. Figure 2: Entities and Queries extracted for the FWM example. The DOS model can also be used to detect candidate database refactorings. In this paper, we illustrate this capability by focusing on the join query removal refactoring. A join query involves four elements: a source container, a target container, a query on the source container, and the join condition used to select the object from the target container.
Removing a join query is possible if the relevant properties of the target entity are copied (i.e., duplicated) into the source entity. In this way, the query on the source container becomes sufficient to retrieve all the required information. However, not all properties of the target entity need to be copied, only those that are actually accessed in the code following the join query. In our example, the title field should be copied into the WatchedMovie objects, but not the director field, since the statement following the query accesses movie.title but not movie.director. Therefore, we propose a code analysis approach to identify the data that should be duplicated for each join query. To this end, the list of join queries is iterated, and for each one, subsequent statements are inspected to determine which fields need to be copied. This analysis provides database practitioners with actionable insights to support decisions on which join queries can be removed. These insights include: the source and target containers, the join query itself, the original and modified versions of the query on the source container, the number of lines in which the retrieved data is used, and other queries involving the same containers — allowing database practitioners to assess how frequently the duplicated data is updated. All the information collected during this analysis is referred to as a "join query removal plan". Like any schema change operation, the data duplication involved in a join query removal refactoring requires updating the schema, the database, and the application code.
We have automated this process as follows: (i) the logical schema is modified by adding the duplicated attributes from the referenced entity to the referencing entity; (ii) the database is updated by inserting the duplicated fields into all referencing objects; and (iii) the code is rewritten to remove the join query and to replace all references to the duplicated fields with direct accesses to the updated object. In the case of the FWM script: (i) the title attribute is added to the WatchedMovie entity type in the schema, through an operation on the U-Schema model; (ii) the watchedMovies array of each user object is updated so that each embedded object includes the title field from the referenced movie; and (iii) the FWM script is rewritten as shown in Listing 2.

Listing 2: Pseudo-code updated when join query is removed.

Figure 3 illustrates the sequence of stages in the strategy outlined above. The source code is first injected into a Code model, from which a Control Flow model is derived. This Control Flow model is then analyzed to generate the DOS model, which serves as input to two subsequent processes: one that transforms the physical schema into a logical schema, and another that generates join query removal plans. These plans are presented to database practitioners, who select which ones to apply. For each selected plan, the schema, database, and application code are updated accordingly.

Figure 3: Overview of the U-Schema extraction and join query removal approach.

Listing 3 presents the FWM pseudo-code expressed in JavaScript, which will be used as running example in the following sections. We assume that MongoDB is the underlying document store. Lines 1 to 7 initialize the client variable, which holds the client-side connection to a MongoDB database. Line 9 marks the beginning of the code corresponding to the pseudo-code shown in Listing 1. It is implemented as a findOne() query on the Users collection, which takes two arguments: a query condition and a callback function (i.e., a lambda expression) whose parameter is the object returned by the query. This callback includes a nested findOne() query on the Movies collection. In this inner query, the first argument defines the join condition, while the second is another callback containing an if-then statement. The body of this statement consists of three consecutive console.log statements. As a result, the example involves a nesting of three code blocks.

Listing 3: JavaScript code for the pseudo-code in Listing 1 (data stored in MongoDB).

```javascript
const MongoDB = require('mongodb').MongoClient;
const url = 'mongodb://modelum.es/db:27017';
const dbName = 'streamingservice';
const client = new MongoDB(url);

client.connect(err => {
  client.db(dbName).collection('users').findOne(
    { name: 'Brian' }, (err, user) => {
      client.db(dbName).collection('movies').findOne(
        { _id: user.watchedMovies[0].movie_id }, (err, movie) => {
          if (user.watchedMovies[0].stars >= 5) {
            console.log(user.name + ' ' + user.surname);
            console.log(user.email + ' Last watched movie:');
            console.log(movie.title + user.watchedMovies[0].stars);
          }
        });
    });
});
```

# 3. Obtaining an abstract representation of the code

In this section, we describe how the source code to be analyzed is internally represented using two complementary metamodels: the Code metamodel, which captures the structural elements of the program, and the Control Flow metamodel, which models the execution order of its statements. These representations provide the foundation for subsequent analysis stages, including the identification of database operations and the extraction of the schema.

# 3.1. From source code to Code models

Instead of relying directly on the abstract syntax tree (AST) of the source code, we define the Code metamodel to represent code in a more abstract and uniform way, independent of the concrete syntax. An excerpt of this metamodel is shown in Figure 4, which illustrates the main concepts and relationships common to object-oriented languages, and specifically includes the JavaScript statements relevant to our work. This decision aligns with the model-driven engineering principle of building metamodels to define the abstract syntax of software languages [12, 14, 26]. In designing the Code metamodel, we were primarily inspired by the Java MODISCO metamodel [4], and to a lesser extent by the Code package of KDM [21]. It is important to note that our metamodel was not tailored to any specific object-oriented language. Instead, language-specific elements are defined in separate extension metamodels, allowing for a modular architecture that can be easily adapted to different programming languages. This design also ensures that the most common statements are centralized in the core Code model, promoting consistency and reuse across language variants.

Next, we describe the Code metamodel in sufficient detail to explain how Control Flow models are derived. A Code model represents an executable unit, such as a JavaScript script or program. It aggregates Containers, global variable declarations (VariableDecl), and Types, which can be either primitive types or classes. A Container represents a structural unit that holds scripts or classes, such as packages, namespaces, folders, or files. Containers can be nested and may aggregate CodeContainers, which in turn group code blocks, class declarations, and variable declarations. A CodeBlock contains an ordered list of Statements, such as conditionals or loops, and also declares local variables.
Figure 4 shows only the types of statements that appear in the running example: conditional selection, method calls, variable accesses, and object creation expressions. A special kind of code block is the CallableBlock, which represents executable blocks invoked by the program, such as methods, functions, constructors, and lambda expressions. Code models are obtained (injected) from the AST provided by a parser. Currently, we have implemented an injector for JavaScript, using the Esprima parser. This parser provides the AST as a set of JSON documents. Notably, each JSON node includes a class field that clearly identifies the type of statement or expression it represents, which facilitates the mapping process. As a result, the correspondence between the Esprima AST elements and the elements of the Code metamodel is simple and direct in most cases, requiring minimal transformation logic. Figure 5 shows an excerpt of the Code model injected from the JavaScript code of the running example. This figure omits the portion of the model starting from the CallableBlock element, which represents the lambda expression passed as the second argument of the findOne() query. The injected model has a CodeModel as its root element, which aggregates the container created for the script file runningExample.js. This container stores the absolute path of the file. For this script, a CodeContainer of type "script" is created. This container aggregates a single CodeBlock that corresponds to the script's body, and it represents the dot notation expression that includes the outer findOne() call. In this expression, the client variable is accessed to invoke the db method with the dbName argument (another VariableAccess element). That call is then chained to the collection method with the string literal 'users' as an argument, and finally to the findOne() call.
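To illustrate how direct this correspondence is, the following sketch maps Esprima-style JSON nodes to simplified Code model elements. The mapping table and the element shapes are hypothetical; standard ESTree nodes carry a `type` tag such as 'CallExpression', which is what the sketch dispatches on.

```javascript
// Dispatch table from AST node categories to simplified Code model elements
// (illustrative subset; the real injector covers the whole metamodel).
const astToCode = {
  CallExpression: node => ({ kind: 'MethodCall', args: node.arguments.length }),
  VariableDeclaration: node => ({ kind: 'VariableDecl',
                                  name: node.declarations[0].id.name }),
  IfStatement: () => ({ kind: 'Selection' })
};

function inject(node) {
  const map = astToCode[node.type];
  if (!map) throw new Error('unmapped node type: ' + node.type);
  return map(node);
}

// A fragment of the AST for `const client = new MongoDB(url);`:
const decl = inject({
  type: 'VariableDeclaration',
  declarations: [{ id: { name: 'client' } }]
});
// decl = { kind: 'VariableDecl', name: 'client' }
```

Because each AST node declares its own category, the injector reduces to a lookup plus a small per-category constructor, which is why minimal transformation logic is needed.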
This last call takes two arguments: a lambda expression represented by an anonymous CallableBlock, and an object creation expression that includes the property ’name’ and the literal value ’Brian’. The complete Code model includes two Class elements that represent the user and movie objects. Each class defines a set of properties: movie includes id, title, and director; while user includes name, surname, email, and watchedMovies. Testing. To validate this stage, we applied the testing strategy defined in [9], which describes a model-driven software reengineering process. A set of simple tests was performed on minimal code snippets, each containing only the essential instructions needed to represent a specific code construct (e.g., a loop). In each test, the code was automatically regenerated from the Code model and then compared with the original code, taking formatting into account. This approach enabled an iterative development process, where the injection logic for a single statement type was implemented and tested in each iteration. A text comparison tool, such as the standard ’diff’ utility on Ubuntu, was used to compare the original and regenerated code. It is worth noting that the incremental development of the Code metamodel prevented us from using a model-based language workbench to automatically generate the model injector from an EBNF-like JavaScript grammar specification. This decision was motivated by the complexity and large size of the metamodel. # 3.2. Representing the control flow Code analysis typically requires not only a representation of the syntax tree, but also knowledge of the control flow graph. To represent control flow in our approach, we have defined the metamodel shown in Figure 6. This metamodel was designed based on the representation proposed in the algorithm described in [1]. A Control Flow model is derived from a Code model, with its nodes and edges referencing the corresponding statements in the Code model. 
Both models serve as input to the code analysis process described in the following section. It is important to note that this representation is not a model of the program's runtime behavior, but rather a control flow model, that is, a structural abstraction of the code's possible execution paths. This distinction is particularly relevant in the case of JavaScript, where asynchronous constructs may cause the actual execution order to diverge from the control structure represented in the model.

Figure 4: Excerpt of the main elements of the Code Metamodel.

Figure 5: Excerpt of the Code model extracted for the running example (starting from line 9).

As shown in Figure 6, a Control Flow model contains a set of code subgraphs, each representing either a code block or a callable unit (e.g., a method or function). Each subgraph consists of nodes corresponding to statements, which are connected by directed edges. Every node may have outgoing and incoming edges. Edges represent either unit calls or conditional branches. Accordingly, nodes hold references to Statement elements in the Code model, while edges reference either method calls or conditional expressions, also defined in the Code model. Since the Control Flow model is explicitly linked to the Code model from which it is derived, code analysis can traverse control paths to access the corresponding statements and perform higher-level reasoning over the program structure.

Figure 6: Control Flow Metamodel.

```
Data: codeModel : Code model
Result: cfModel : Control Flow model
 1  cfModel ← createControlFlowModel()
 2  codeBlocks ← getCodeBlocks(codeModel)
 3
 4  foreach cBlock ∈ codeBlocks do
 5      sGraph ← createSubGraph(cfModel)
 6      sNode ← createStartNode(sGraph)
 7      lastNode ← sNode
 8      eNode ← createEndNode()
 9      foreach st ∈ cBlock.statements do
10          node ← createConnectedNode(st, st.variables)
11          createEdge(lastNode, node)
12          lastNode ← node
13      end
14  end
15
16  Function branch(st, lastNode):
17      foreach newSt ∈ st.statements do
18          node ← createNode(newSt, newSt.variables)
19          createEdge(lastNode, node)
20          lastNode ← node
21      end
22      endNode ← createEndNode()
23      createEdge(lastNode, endNode)
24      return endNode
25  Function createConnectedNode(st, variables):
26      node ← createNode(st, variables)
27      if hasArguments(st) then
28          foreach arg ∈ st.arguments do
29              createEdge(node, createNode(arg, arg.variables))
30          end
31      else if isSelection(st) then
32          bNode ← createStartNode(node)
33          foreach case ∈ st.cases do
34              nNode ← branch(case.statements, bNode)
35              createEdge(bNode, nNode)
36          end
37      else if isTry(st) then
38          bNode ← createStartNode(node)
39          foreach catch ∈ st.catches do
40              nNode ← branch(catch.statements, bNode)
41              createEdge(bNode, nNode)
42          end
43      else if isLoop(st) then
44          nNode ← branch(st.statements, node)
45          createEdge(nNode, node)
46      end
47      return node
```

Algorithm 1: Control Flow Construction Algorithm.

To generate Control Flow models, we adapted the algorithm described in [1]. In our case, the input is a Code model rather than an abstract syntax tree (AST), and the output is a corresponding Control Flow model.
To this end, we defined Algorithm 1, which traverses the input Code model and creates the corresponding nodes and edges in the output Control Flow model. The specific types of nodes and edges generated depend on the kind of element being processed in the Code model. Algorithm 1 operates as follows. First, the root element of the ControlFlowModel is created (line 1), and all code blocks (CodeBlocks and CallableBlocks) in the Code model are retrieved and collected into a list that is iterated over (lines 2 and 4). For each code block, a corresponding SubGraph is created and initialized with its start and end nodes (lines 5–8). The statements within each code block are then iterated (line 9), and a node is created for each statement using the createConnectedNode function (line 10), which internally calls createNode() (line 25). Each node maintains a reference to the corresponding code statement and the variables used in it. By default, each newly created node is connected to the previously created node (line 11). However, if the current statement is a conditional, loop, or exception trigger, the createConnectedNode function creates new subgraphs to represent the nested statements before establishing the corresponding control flow edges (lines 26–46). As an example, a conditional statement is handled as follows (lines 31–36): First, the initial decision node is created (line 32); then, a new branch is created for each Case (line 34), and a conditional expression edge is added to hold the condition for each branch (line 35). The Branch function is responsible for instantiating the nodes that belong to a specific branch, that is, the statements contained within the code block associated with that branch (lines 17 to 21), and also for creating the final node of the branch (line 22). Finally, an edge is created to connect the last node in the flow to the graph’s end node (line 23). 
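The sequential chaining performed by the main loop of Algorithm 1 (lines 9 to 13) can be sketched as runnable JavaScript. The node and edge shapes below are illustrative simplifications, not the actual Control Flow metamodel classes, and the sketch covers only straight-line code (no branching).

```javascript
// Build a subgraph for one code block: a start node, one node per statement
// chained with NEXT edges, and an end node closing the flow.
function buildSubGraph(codeBlock) {
  const nodes = [{ id: 'start' }];
  const edges = [];
  let lastNode = nodes[0];
  for (const st of codeBlock.statements) {
    const node = { id: st.id, statement: st };
    nodes.push(node);
    edges.push({ from: lastNode.id, to: node.id, kind: 'NEXT' });
    lastNode = node;
  }
  const endNode = { id: 'end' };
  nodes.push(endNode);
  edges.push({ from: lastNode.id, to: endNode.id, kind: 'NEXT' });
  return { nodes, edges };
}

// A two-statement block yields the chain start -> s1 -> s2 -> end:
const g = buildSubGraph({ statements: [{ id: 's1' }, { id: 's2' }] });
```

Conditionals, loops, and try blocks would extend this sketch with the nested subgraphs created by createConnectedNode in the algorithm.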
The Control Flow model generated from the Code model of the running example is shown in Figure 7. The ControlFlowModel root element aggregates three subgraphs. The first, a CodeBlockSubGraph, corresponds to the outer method call chain client.db(dbName).collection('users').findOne({name: 'Brian'}, callback), whose last argument is a nested lambda expression. This subgraph therefore contains three "call" nodes in addition to the start and end nodes. The third "call" node has an outgoing edge pointing to the start node of a CallableBlockSubGraph, which corresponds to the inner method call chain client.db(dbName).collection('movies').findOne({_id: user.watchedMovies[0].movie_id}, callback), whose last argument is also a nested lambda expression. This second subgraph also contains three "call" nodes, with the third node connected to the start node of a third CallableBlockSubGraph. This last subgraph includes a selection node, which leads to three additional "call" nodes corresponding to the three console.log statements within the conditional block. Testing. We validated this second step by visually verifying that the generated models accurately represent the control flow of the code. To facilitate this task, the models were stored in a Neo4J graph database, allowing us to leverage the Neo4J Browser, which displays graph query results as navigable visual graphs. Figure 8 shows an excerpt of the graph corresponding to the Control Flow model of the running example. Each node is labeled with a snippet of code. In particular, the graph illustrates the control flow of the if-then statement, which includes a block of three console.log statements. The if node has two outgoing edges: one of type Selection, representing the condition, and another of type Jump, indicating the continuation point if the condition is not met. Each print-related node includes an outgoing edge labeled NEXT, which represents the sequential execution flow, and an additional edge labeled argument, pointing to a node that represents the argument of the print statement.
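Such a model-to-graph mapping amounts to generating one Cypher statement per node and per edge. The sketch below only produces Cypher text from an illustrative graph shape; it does not reflect the actual export code and involves no database connection.

```javascript
// Generate Cypher CREATE statements for a control-flow graph: one labeled
// node per model node, one relationship per edge.
function toCypher(graph) {
  const stmts = graph.nodes.map(
    n => `CREATE (n${n.id}:Node {code: '${n.code}'})`);
  for (const e of graph.edges) {
    stmts.push(`CREATE (n${e.from})-[:${e.kind}]->(n${e.to})`);
  }
  return stmts;
}

// A selection edge from the if node to a print node, as in Figure 8:
const cypher = toCypher({
  nodes: [{ id: 1, code: 'if (stars >= 5)' },
          { id: 2, code: 'console.log(...)' }],
  edges: [{ from: 1, to: 2, kind: 'SELECTION' }]
});
```

Feeding such statements to Neo4J yields the navigable graphs used during validation.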
These Neo4J visualizations are significantly more readable and intuitive than the raw Control Flow model, thereby facilitating the validation process. The mapping from the Control Flow model to the Neo4J database is straightforward, and the generation code is produced automatically using the same mechanism employed during the validation of the Code model injection.

# 4. Finding Operations and Structure of the database

Code and Control Flow models are analyzed to discover the implicit database schema and to apply database refactorings. In the first step of the analysis, information about CRUD operations and the structure of the manipulated data is captured in an intermediate representation. This representation is defined by the Database Operation and Structure (DOS) metamodel, as shown in Figure 9. In this section, we first present the algorithm designed to obtain a DOS model. In the following section, we describe how the DOS model is transformed into a U-Schema model, that is, a unified logical schema. The subsequent section illustrates how the DOS model can also be used to support the join query removal refactoring, aiming at improving query efficiency. The root element of the DOS metamodel is DOSmodel, which aggregates DatabaseOperation and Container elements, as shown in Figure 9. A Container represents a data storage unit, such as a collection in a document store or a table in a relational database. Each container holds one or more DataStructure elements, which in turn aggregate the set of fields present in the objects stored within that container. Since NoSQL systems are often schemaless, a container may include multiple data structures to manage structural variations. However, as noted in Section 1, our approach does not address the detection of structural variability. Each Field has a name and a type. The type can be one of the following: Attribute, Collection, Aggregate, or Reference.
An Aggregate type encapsulates a DataStructure that represents the internal structure of embedded objects within a root object. A Reference type links to attributes belonging to another data structure, indicating a reference relationship between entities. With respect to database operations, the metamodel defines a specific subclass of DatabaseOperation for each CRUD operation: Read, Insert, Update, and Delete. Each DatabaseOperation maintains a reference to the corresponding Statement in the Code model. These operations also reference the data structures they interact with and may include parameters. Furthermore, DatabaseOperation elements are connected through the previousDatabaseOperation and nextDatabaseOperation relationships, forming a chain that reflects their execution order within the control flow. A graph traversal algorithm is applied to the Control Flow model to identify all references between data elements and their involvement in database operations. As shown in Algorithm 2, a backward traversal is used to detect operations and data dependencies, while a forward traversal discovers data structures and links each operation to the data it accesses. In Control Flow models, nodes have outgoing edges pointing to target nodes, and incoming edges originating from source nodes, as defined in the Control Flow metamodel (see Figure 6). The source and target references of edges enable the implementation of backward and forward traversals, respectively, as illustrated in Figure 10. In this figure, a blue edge exits from node A to node B via an outgoing edge and a target reference, while a red edge enters node A from node B via an incoming edge and a source reference.

Figure 7: Control Flow model for the running example.

Figure 8: Graph excerpt of the Control Flow model for the running example.

Before traversing the control flow graph, two preliminary operations are performed. First, a DOSmodel root element is created (line 1).
Second, each subgraph (representing either a function or a script) is traversed to identify the Call nodes associated with database operations. These nodes are collected in the ordered list dboNodes (line 2), which reflects the execution order of the program. Relevant nodes are identified by matching function names against those defined in the database management API used in the code. It is important to note that in a Control Flow model, each subgraph is connected to the next in the execution sequence through a CallEdge, as shown in Figure 7. The source and target subgraphs may belong to different functions or even to different files. After this initialization, the functions that implement the backward and forward traversals are called (lines 3 and 4).

Figure 9: Database Operation & Structure Metamodel.

Figure 10: Nodes and edges in Control Flow models.

The backwardTraverse() function (lines 7 to 24) iterates over the dboNodes list, which contains the Call nodes related to database operations. For each visited node (dboNode), a Read, Insert, Update, or Delete operation is instantiated, depending on the type of database operation (line 9). The created DatabaseOperation instance (dbo) contains a reference to the corresponding Statement in the Code model, which is obtained from the node.
Next, the Arguments of dboNode are stored in a variable search list sList (line 11), and the control flow graph is traversed backwards from the source node to the current node (lines 12–13). Each visited node (sNode) is processed as follows: If the node corresponds to a database operation call and the variable receiving the result of that call matches one of the variables in sList (line 14), then the current operation (dbo) is connected to the operation created for sNode (pDBO) via the previousDBO relationship (line 16). In turn, pDBO is connected back to dbo through the nextDBO relationship (line 17). This bidirectional connection indicates a data dependency between the two operations—that is, the output of one serves as input to the other. If the two operations act on different collections, the operation receiving the data is marked as a join query (line 18). It is important to note that the findDbOperation function is responsible for retrieving the database operation call statement (Call) referenced by the visited node (sNode). This function navigates from the Control Flow model to the Code model and then checks whether the call is present in the set of database operations stored in the DOS model. If the visited node sNode does not meet the conditions above, and its associated Statement is an assignment, the algorithm checks whether the variable on the left-hand side is in sList (line 19). If so, the variable on the right-hand side of the assignment is added to sList (line 20), thus continuing the tracking of data dependencies through variable propagation. Once the backward traversal is completed, all database operation Call nodes are revisited using a forward traversal (line 4). In this traversal, only nodes corresponding to Read operations are processed, through the forwardTraverse() function. For each Read node (line 28), a DataStructure is instantiated and aggregated into a Container. Both instances are created if they do not already exist (lines 29 and 30). 
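The backward variable propagation just described can be sketched as follows; the node shapes (`kind`, `returnVariable`, `left`, `right`) are hypothetical simplifications of Control Flow nodes.

```javascript
// Walk back from a database call: assignments whose left-hand side is in the
// search list extend the list, so data dependencies survive aliasing through
// intermediate variables; a matching earlier db call is a join candidate.
function findPreviousOperation(dbCallArgs, reversedNodes) {
  const sList = new Set(dbCallArgs);
  for (const node of reversedNodes) {
    if (node.kind === 'dbCall' && sList.has(node.returnVariable)) {
      return node;            // data dependency: output feeds the later call
    }
    if (node.kind === 'assignment' && sList.has(node.left)) {
      sList.add(node.right);  // keep tracking through the alias
    }
  }
  return null;
}

// Even if `movieId = user.watchedMovies[0].movie_id` aliases the first
// query's result, the movies query is still traced back to it:
const prev = findPreviousOperation(['movieId'], [
  { kind: 'assignment', left: 'movieId', right: 'user' },
  { kind: 'dbCall', returnVariable: 'user', container: 'users' }
]);
```

In this run, `prev` is the users query, so the later movies query would be connected to it via previousDBO and marked as a join.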
Additionally, the DataStructure is linked to the corresponding Read operation (line 31).

```
Data: cfModel : Control Flow Model
Result: dosModel : DOS Model
 1  dosModel ← createDOSmodel()
 2  dboNodes ← getDatabaseOpsCallNodes(cfModel)
 3  backwardTraverse(dboNodes)
 4  forwardTraverse(dboNodes)
 5  createReferences(dboNodes)
 6
 7  Function backwardTraverse(dboNodes):
 8      foreach dboNode ∈ dboNodes do
 9          dbo ← createDatabaseOperation(dboNode)
10          sList ← getArguments(dboNode)
11
12          sNode ← getPreviousNode(dboNode)
13          while sNode do
14              if isDatabaseOperation(sNode) ∧ getReturnVariable(sNode) ∈ sList then
15                  pDBO ← findDbOperation(sNode)
16                  dbo.previousDBO ← pDBO
17                  pDBO.nextDBO ← dbo
18                  markJoinQuery(pDBO)
19              else if isAssignment(sNode) ∧ getLeftVariable(sNode) ∈ sList then
20                  sList.add(getRightVariable(sNode))
21              end
22              sNode ← getPreviousNode(sNode)
23          end
24      end
25
26  Function forwardTraverse(dboNodes):
27      readNodes ← getReadOperations(dboNodes)
28      foreach dboNode ∈ readNodes do
29          container ← getOrCreateContainer()
30          ds ← getOrCreateDataStructure(container)
31          dboNode.result ← ds
32          sList ← dboNode.statement.result
33
34          tNode ← getNextNode(dboNode)
35          while tNode do
36              if tNode.variables ∈ sList then
37                  field ← createField(tNode)
38                  field.type ← createType(tNode)
39                  ds.fields.add(field)
40              end
41              tNode ← getNextNode(tNode)
42          end
43      end
44
45  Function createReferences(dboNodes):
46      joinQueries ← getJoinQueries(dboNodes)
47      foreach dbo ∈ joinQueries do
48          sField ← findSource(dbo.previousDBO)
49          tField ← findTarget(dbo)
50          createReference(sField, tField)
51      end
```

Algorithm 2: Database Operation and Structure Extraction Algorithm.
A new search list is also initialized in this traversal (line 32), which is used to collect the data retrieved by the Read operations. Each subgraph is traversed (lines 34 to 42), starting from the node that follows the database operation node (tNode) (lines 34 and 35). For each visited node, its list of variables is iterated to check which variables are holding values read from the database. To do this, each variable is matched against the elements of sList (line 36). When a match is found, the corresponding Read statement is analyzed to identify the accessed fields of the retrieved object. If a property access is detected, a new Field is created and associated with the current DataStructure (lines 37–39). The type of the Field is determined as follows (line 38):

• If another property is accessed from the current property, this indicates that the current property holds an embedded object, and its type is set to Aggregate.

• If a collection operation is found, the type of the current property is set to Collection.

• Otherwise, the type of the property is set to Attribute. Note that references are not detected during the forward traversal of the graph.

It is important to note that the fields of a particular data structure may be discovered at any point during the traversal. As a result, the type of a field may either change or remain undetermined until sufficient context is available. Once the forward traversal is completed, the identification of attributes that are actually references is performed by calling the createReferences function (line 5). This function first collects all Read operations marked as join queries (line 46), that is, those for which the previousDatabaseOperation relationship is not null. This collection is then iterated (lines 47–51), and for each join query, the join condition is analyzed to extract the name of the field involved in the join (lines 48 and 49). A Reference type is then created in the DOS model (line 50).
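The type inference rules above can be sketched as a small decision function; the access encoding (`nestedProperty`, `collectionOp`, `indexed`) is an illustrative simplification of what the traversal observes.

```javascript
// Decide the DOS field type from how a property is used in the code.
function inferType(access) {
  if (access.nestedProperty) return 'Aggregate';      // e.g. x.y accessed as x.y.z
  if (access.collectionOp || access.indexed) return 'Collection'; // e.g. x.y[0]
  return 'Attribute';                                  // plain value access
}

// Accesses observed in the running example:
const types = {
  watchedMovies: inferType({ indexed: true }),          // user.watchedMovies[0]
  surname: inferType({}),                               // user.surname
  watchedMovie: inferType({ nestedProperty: 'stars' })  // watchedMovies[0].stars
};
```

This reproduces the outcome described for the running example: watchedMovies is a Collection, surname a plain Attribute, and the embedded watched-movie object an Aggregate; references are only resolved later by createReferences.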
This reference points to the target container and is also associated with the corresponding attribute field previously identified during the forward traversal. Since the corresponding data structures may not yet exist during the initial identification of join queries, reference relationships cannot be extracted at that stage. Finally, the algorithm checks whether multiple data structures contain identical sets of fields. When duplicates are found, only one structure is retained and the others are discarded. Figure 11 shows the DOS model obtained by applying Algorithm 2 to the running example. The DOSmodel aggregates two Read operations —one for each findOne call node— as well as the movies and users containers. Each Read operation is associated with its corresponding data parameter, and each container is linked to a DataStructure that captures the fields of the objects stored within it. The construction of this DOS model proceeds as follows. During the backward traversal, the two Read operations are identified and connected via the next and previous relationships. Since they satisfy the join condition, the second operation is marked accordingly as part of a join query. In the forward traversal, two Container elements are detected, each associated with its respective DataStructure. These containers are identified by inspecting the arguments of the corresponding Read operations. Field detection proceeds in two steps: (i) the name field in the User container and the id field in the Movie container are identified from the first argument of the respective Read operations, and (ii) the remaining fields are inferred from the usage of the result variables in the if-then Selection node. These fields appear in the condition expression and its only branch, as well as in the three console.log Calls that form the body of the block (see Figure 7). The watchedMovies field is identified as a collection (array) containing elements of a third data structure. 
Its type is inferred as an array of Aggregate objects, each containing the fields stars and movie id. The latter is eventually identified as a reference. Testing. We tested Algorithm 2 using small code snippets structurally similar to the running example. For each snippet, we manually verified that the resulting DOS model correctly represented the Container, DataStructure, Field, and Type elements. We began with a minimal example containing a single Read database operation with one property in the filter. The resulting model was checked to ensure that it included one Read operation and one Container, associated with a DataStructure containing a single Field. Subsequently, we incrementally added statements that manipulated the result object returned by the query. This led to the discovery of additional fields, and we reexamined the model to confirm that these fields were properly captured. The process continued in several iterations by progressively incorporating different types of database operations and accessing up to three distinct containers. At each step, new fields were introduced to increase variability in the data structures and to validate the model’s ability to adapt to structural changes. # 5. Obtaining the Database Schema As explained in Section 2, the DOS metamodel is designed to represent both the set of database operations found in the analyzed code and the structure of the data stored. The structural part of a DOS model captures the database’s physical schema: containers of objects whose data structure consists of a set of fields, which can be attributes, collections, aggregates, or references. From the physical schema, a logical schema can be derived. In our approach, the U-Schema unified metamodel [11] is used to represent the logical schema, which is obtained through a model-to-model transformation. The U-Schema metamodel is depicted in Figure 12. 
In U-Schema, a schema consists of a set of schema types, which can be either entity types or relationship types. The latter include, for example, many-to-many tables in relational schemas and edges in graph schemas. Each type aggregates one or more structural variations, each composed of a set of features of two kinds: structural and logical. Structural features can be attributes or aggregates, and define the internal structure of database objects. Logical features can be keys or references, and are used to identify attributes that hold identifier values. A more detailed explanation can be found in [11].

Table 1: DOS metamodel to U-Schema metamodel mappings.

Table 1 shows the DOS-to-U-Schema mappings applied to obtain logical schemas. The transformation begins by creating a USchemaModel as the root element. Then, an EntityType is created for each Container element. Next, each DataStructure is mapped to a StructuralVariation of the U-Schema metamodel, which includes a set of features. An entity type may have multiple variations if the code analysis is performed on different versions of the same script or application. Field elements are mapped to U-Schema features as follows:

(i) An Attribute feature is created for each Attribute field, preserving its name and primitive type.
(ii) A Reference feature is created for each Reference field, which is linked to the EntityType corresponding to the Container specified by the targetContainer property.
(iii) Each Composition field is mapped to an embedded (non-root) entity type and an associated Aggregate feature. This process also creates a StructuralVariation for the referenced DataStructure, and is recursively applied to all fields contained within it.
(iv) Each Collection field is mapped to an Attribute feature. The collection type is obtained from the collectionType property of the Collection meta-class. Collections may contain either primitive values or embedded objects.
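As an illustration, the mapping rules above can be sketched in JavaScript (the language of the analyzed applications). The object shapes below are hypothetical stand-ins for the Ecore-based DOS and U-Schema models, not the actual metamodel APIs:

```javascript
// Illustrative sketch of the DOS-to-U-Schema mapping rules of Table 1.
// Object shapes are assumptions; the real models conform to Ecore metamodels.
function mapField(field) {
  switch (field.kind) {
    case 'Attribute':
      // Rule (i): attribute fields keep their name and primitive type.
      return { kind: 'Attribute', name: field.name, type: field.type };
    case 'Reference':
      // Rule (ii): linked to the EntityType of the targetContainer.
      return { kind: 'Reference', name: field.name, target: field.targetContainer };
    case 'Composition':
      // Rule (iii): an Aggregate feature plus an embedded entity type,
      // built recursively from the nested data structure.
      return { kind: 'Aggregate', name: field.name,
               variation: { features: field.dataStructure.fields.map(mapField) } };
    case 'Collection':
      // Rule (iv): collections become attributes typed by collectionType.
      return { kind: 'Attribute', name: field.name, type: field.collectionType };
  }
}

function toUSchema(dosModel) {
  return {
    entityTypes: dosModel.containers.map(c => ({
      name: c.name,
      root: true,
      // One StructuralVariation per DataStructure of the container.
      variations: [{ features: c.dataStructure.fields.map(mapField) }]
    }))
  };
}
```

The recursion in the Composition case mirrors the paper's note that the transformation is applied recursively to all fields of an embedded data structure.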
Figure 13 shows the U-Schema model derived from the DOS model presented in Figure 11. The schema is visualized using SkiQL, a notation specifically designed for NoSQL schemas [10]. The schema includes the User and Movie entity types (yellow boxes), which correspond to the user and movie Containers. Each entity type aggregates (black dashed arrows) a single StructuralVariation. These variations are derived from the DataStructure associated with each container in the DOS model. A third entity type, WatchedMovie (grey box), represents the data structure embedded within the one associated with the user container. This entity type also includes a single structural variation, which defines a reference to Movie (blue solid arrow). An aggregate relationship (red solid arrow with a diamond) connects User to WatchedMovie. Attribute features are displayed inside the boxes representing the structural variations.

Figure 11: Database Operations & Structure Model for the running example.

# 6. Finding join removal refactoring candidates

A DOS model provides valuable insights that support database practitioners in identifying potential refactorings aimed at improving data quality or query performance. For example, analyzing the number of fields in each entity type can help detect overloaded entities that might benefit from being decomposed into smaller, more cohesive structures. In addition, the presence of join queries may reveal opportunities to group related data into aggregates, thereby avoiding time-consuming joins between separate containers and improving query performance. To illustrate how our approach enables the detection and application of refactorings, we focus on the join query removal refactoring, previously introduced in Section 2. Applying this refactoring requires the following elements: the join queries, the two involved entity types, and the set of fields to be duplicated. Notably, this last element is the only one not directly represented in the DOS model.
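For concreteness, the three elements required by the refactoring could be bundled for the running example along these lines; the plan structure itself is a hypothetical serialization, not one fixed by the paper:

```javascript
// Hypothetical shape of the information needed for a join removal;
// entity, container, and field names come from the running example,
// but the concrete object layout is an assumption.
const joinRemovalInput = {
  joinQuery: { kind: 'Read', container: 'movies' },     // the query to remove
  previousQuery: { kind: 'Read', container: 'users' },  // the query it depends on
  sourceEntityType: 'User',     // receives the duplicated fields
  targetEntityType: 'Movie',    // owns the original fields
  fieldsToDuplicate: ['title']  // the element not directly in the DOS model
};
```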
The remainder of this section describes how such information is identified. We refer to the data provided to practitioners to support this refactoring as a join removal plan. Algorithm 2 identifies and marks the Read operations that correspond to join queries (line 12), as described in Section 4. Therefore, the main objective of the next analysis step is to determine which fields should be duplicated for each join query present in the DOS model. These fields are discovered by analyzing the statements that appear after the join query in the control flow. In particular, the algorithm looks for variable access statements in which the result variable of the join query is accessed via dot notation to retrieve specific fields from the data structure associated with the referenced container. It is important to note that the result variables of both queries involved in a join typically appear within the same CodeBlock, facilitating this type of data dependency analysis. Algorithm 3 identifies and selects the data to be duplicated in order to eliminate a join query. It proceeds as follows. First, the set of join queries (Read operations) is collected, specifically those with at least one prevDatabaseOp relationship (line 1). Next, two variable search lists are initialized (lines 2 and 3). For each join query, the result variable is added to joinSList (line 7), while the result variable of the preceding query is added to prevSList (line 8).

Figure 12: U-Schema Data Model.

At this point, the function findFollowingNode retrieves the node from the control flow model that corresponds to the current join query and returns its immediate successor (line 10). A forward traversal is then performed to analyze the variables used in the statements that follow in the control flow (lines 11 to 23). Each visited node is inspected to check whether any of its variables appear in both the joinSList and prevSList search lists (line 12).
This condition is satisfied when the result of the join query is used alongside the result of the preceding query. In such cases, the fields accessed from the join query's result variable are considered candidates for duplication if the join is to be removed. When this condition holds, the DOS model is updated as follows. First, the fields accessed in the current statement are obtained from the corresponding node in the Control Flow model (line 13). Then, each corresponding Field is copied (line 14) and assigned to the result of the preceding Read operation (line 15), that is, to the DataStructure of the object retrieved by the prior database operation. The copied Field maintains a reference to the original. During the traversal, if an assignment is detected in which the result of a database operation is assigned to another variable (line 17), the new variable is added to joinSList (line 18). The same logic is applied to the result of the preceding operation (lines 20–21). This process is repeated for all database operations.

Data: cfModel : Control Flow Model
Data: dosModel : DOS Model
Result: dosModel : DOS Model
1  joins ← getJoinQueries(dosModel)
2  joinSList ← ∅
3  prevSList ← ∅
4
5  foreach join ∈ joins do
6      pDBO ← join.prevDBO
7      joinSList.add(getResultVariable(join))
8      prevSList.add(getResultVariable(pDBO))
9
10     node ← findFollowingNode(cfModel, join)
11     while node do
12         if node.variables ∈ joinSList ∧ node.variables ∈ prevSList then
13             fields ← getFields(join, node.variables)
14             joinQueryFields ← copyFields(fields)
15             pDBO.resultDS.add(joinQueryFields)
16         end
17         if isAssignment(node, joinSList) then
18             joinSList.add(node.variables)
19         end
20         if isAssignment(node, prevSList) then
21             prevSList.add(node.variables)
22         end
23         node ← getFollowingNode(node)
24     end
25 end
Algorithm 3: Field Duplication Detection Algorithm

When the algorithm is applied to the DOS model of the running example, the join query list includes a single Read operation, as shown in Figure 11. This operation is processed as follows. The result variable of the join query (movie) is added to joinSList, and the result variable of the previous query (user) is added to prevSList. The control flow traversal then starts from the node returned by the findFollowingNode function, which returns the node corresponding to the if-then statement labeled Selection in Figure 7. Each visited node is then inspected for usages of the variables in the search lists. Specifically, the algorithm checks whether the variables user (in prevSList) and movie (in joinSList) are used together within the same Statement or CodeBlock. It visits the three Call nodes representing the console.log statements in the control flow model and finds that both variables are used together in the last call. At this point, the title field is identified as a candidate for duplication, as the expression movie.title is detected. As a result, the title field is copied and added to the DataStructure that contains the movie_id field involved in the join condition. The newly created field is marked as duplicated and maintains a reference to the original Field in Movie. Subsequent nodes are traversed to detect further usages of the search-list variables.

Figure 13: U-Schema model obtained from code analysis for the running example.

Once the fields to be duplicated are identified and the DOS model is updated, the model contains all the information required for a database practitioner to decide whether a particular join query should be removed.
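The sequential pattern that this analysis inspects in the running example can be sketched as follows. This is a hedged reconstruction (the actual snippet appears in Figure 7), with identifier names assumed; the function returns the movie title only so the data dependency is observable:

```javascript
// Hedged reconstruction of the running example's join pattern (cf. Figure 7).
// The db parameter is any object exposing the MongoDB collection API.
async function printFirstWatchedMovie(db, userName) {
  const user = await db.collection('users').findOne({ name: userName });
  if (user) {
    // Join query: its filter uses the result variable of the previous query.
    const movie = await db.collection('movies')
      .findOne({ id: user.watchedMovies[0].movie_id });
    console.log(user.name);
    console.log(user.watchedMovies[0].stars);
    console.log(movie.title); // movie.title: the field-duplication candidate
    return movie.title;
  }
}
```

Here user and movie are exactly the two result variables that end up in prevSList and joinSList, and the last console.log is the Call node where both are used together.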
A dedicated application could be developed to display this information in the form of join removal plans, allowing users to select which refactorings are automatically applied. In the case of a join removal refactoring, each plan would include the following: the join query and related operations, the source and target entity types, the fields from the target to be duplicated in the source, and both the original and a rewritten version of the affected code. When a user selects a database refactoring, a schema change must be performed. This requires updating the logical schema, the database contents, and the application code. We have implemented the automatic update of both the database and the code, as described in the following sections. Updating database. We have used the Orion language [17] to update both the schema and the stored data. Orion is a generic schema evolution language for NoSQL and relational databases, defined for the U-Schema metamodel. From each refactoring plan, an Orion operation is automatically generated. For example, a COPY operation could be used for the field duplication required by a join removal refactoring. This operation copies one or more fields from a source entity type to a target entity type. In the case of our running example, it would be expressed as: COPY Movies::title, director TO Users::watchedMovies WHERE movie_id = id. Each Orion operation updates the U-Schema model via a model-to-model transformation, and the code that updates the stored data is generated via a model-to-text transformation, which is specific to each database platform. By using Orion, we ensure that schema refactorings are applied consistently across both the schema and data layers. Updating code. Code is updated through a two-step process. First, a model-to-model transformation is applied to update the Code model.
This transformation takes both the DOS and Code models as input and produces a modified Code model as output. The DOS model provides the necessary information: the data to be duplicated for each join query and the Read operations that can be removed. Accordingly, the identified join queries are eliminated, and the corresponding code expressions where the result variable was used are replaced. Specifically, the CodeBlock from the second query is moved into the first one. Then, all occurrences of the original result variable are replaced by the result variable of the preceding query, followed by access to the newly duplicated field. If the duplicated field belongs to an aggregate, the replacement must include access through the corresponding Aggregate field, as illustrated in the running example. It is worth noting that both deletion and replacement operations can be performed directly on the Code model, since it is cross-referenced by the database operations defined in the DOS model. In the second step, the updated Code model is traversed to regenerate the source code. This traversal had already been implemented as part of the testing process for Code model injection, as described in Section 3. In the running example, the second query is identified as a join query, as indicated in the DOS model shown in Figure 11. This query uses the user result variable produced by the preceding query, as reflected in the Code model in Figure 2. If the join query is removed, the expression movie.title is replaced with user.watchedMovies[0].movie_title, according to the duplication logic previously described. Testing. To test the correctness of the code rewriting process, we compiled the transformed code using a standard JavaScript compiler and executed it against the modified database produced by the Orion engine.
This ensured that the generated code was both syntactically valid and executable, and that the resulting queries produced the expected outputs when run on the refactored database.

# 7. Validation

A testing strategy has been applied to each step of the reverse engineering process described in the previous sections. In this section, we present the validation of the complete code analysis process. The input is JavaScript code that manipulates a MongoDB database, and the output is the inferred database schema along with a list of join removal plans. We have also considered the application of schema change operations for join removals that were assumed to be selected by practitioners, in order to simulate a realistic refactoring scenario. First, we describe the experimental design and the methodology followed for the evaluation. Then, we present the results, and finally, we discuss the limitations of our validation.

# 7.1. Experiment: Description and Methodology

The validation was carried out through a round-trip experiment. Since publicly accessible schemas and data are rarely available for non-trivial open-source MongoDB applications, we generated all components of the test environment ourselves. The database schema (a U-Schema model) was created using the Athena language [16]; the dataset was generated with the data generator Deimos [15]. Both Athena and Deimos are tools specifically designed for working with U-Schema models. The application code was produced using an LLM, which was provided with the schema definition in Athena notation. The generated code simulates the backend logic of a small music streaming service, and is available in the git repository alongside the complete implementation of the approach. Having the schema defined in advance facilitated the verification of the code analysis results.
We were able to compare the U-Schema model obtained through our approach with both the original schema defined in Athena and the one inferred from the data using the approach presented in [11]. The schema designed for validation, shown in Figure 14, defines five entity types: Album, Track, Artist, Rating, and Genre. Artist references zero or more Albums and Tracks. Album references one or more Tracks; this reference is named songs instead of tracks to demonstrate that our approach does not rely on name-based heuristics and can still accurately detect references. Similarly, both Album and Track reference zero or more Genres; however, the reference is named categories in Album and genres in Track. Additionally, Album and Track aggregate zero or one Rating. Thus, Rating is an embedded entity, while the rest are root entities. As previously noted, the detection of structural variations through code analysis is inherently complex. Therefore, we defined only one structural variation per entity type. The attributes of each entity type can be seen in Figure 14. The schema is presented using the SkiQL notation [10] as a U-Schema model. Based on the schema shown in Figure 14, we generated a Node.js-based backend with the assistance of ChatGPT-4o, an LLM developed by OpenAI. The backend includes a REST API built with Express.js, with data stored in MongoDB and accessed via the MongoDB native API. Each endpoint of the REST API performs at least one database operation. The domain model defines a class for each entity type in the schema and incorporates various structural relationships, including one-to-many and many-to-many references, as well as embedded documents. The codebase features realistic query patterns involving nested dereferencing and data aggregation, which were used to validate schema inference and the identification and application of refactorings such as join removal.
In particular, several methods simulate join-like behavior by executing multiple dependent queries to resolve references across collections. To populate the database, we created the collections artist, album, track, and genre to store the corresponding documents.

Figure 14: Schema designed for the validation.

Using our synthetic dataset generator, Deimos, we generated a representative and sufficiently diverse set of instances per entity type: 10 artists, 20 albums, 60 tracks, 8 genres, and 60 ratings. This configuration ensures a representative level of variability across relationships, such as tracks with multiple genres, albums and tracks with or without ratings, and nested access patterns through artist and album references. Although the dataset is limited in size, this does not affect the validity or generalizability of our results. Our approach is based on static code analysis, and therefore operates independently of the volume of data stored in the database. What matters for schema extraction and refactoring detection is the structure and usage of the application code, not the number of records. Once the relevant code access patterns are present, the size of the dataset has no influence on the models generated. This makes our validation both efficient and reproducible, without compromising its effectiveness in demonstrating the correctness of the analysis pipeline. Next, we ran our code analysis solution using the generated application code as input. We first compared the inferred U-Schema model with the schema originally defined, and then with the schema inferred from the dataset using our data-based approach presented in [11]. These comparisons allowed us to assess the accuracy of our solution and evaluate how it compares with schema inference techniques based solely on stored data. Regarding the join removal refactoring, we manually verified that all join queries present in the application code were correctly detected.
Each selected join removal was then applied, and we ensured that the schema, data, and source code were properly updated. To verify the correctness of the data updates, we checked that the duplicated fields were correctly inserted into the appropriate documents. To validate schema updates, we modified the original schema by adding the expected duplicated fields. Specifically, for each entity type $t _ { 1 }$ referencing another entity $t _ { 2 }$ , we added to $t _ { 1 }$ the fields from $t _ { 2 }$ that were selected for duplication. We then compared this manually updated schema with the one obtained after applying the automated refactoring. To quantify the scope of the analysis, we examined all route handlers and application logic functions included in the backend. We identified a total of 28 database-access methods distributed across five source files, covering both REST API endpoints and internal application logic. The backend implements full CRUD support for four entity types (Artist, Album, Track, and Genre) using the native MongoDB API. Additionally, the insert and update methods include minimal input validation, such as checks for required fields and basic type constraints. These validations reflect common practices in NoSQL applications, where the database does not enforce schema constraints, and data integrity must be ensured at the application level through presence checks and type validation. The codebase also includes six join query candidates, corresponding to references in the domain schema. These cases were manually reviewed and used to validate the ability of our solution to detect join-like access patterns and generate appropriate refactoring plans for each of them.

# 7.2. Results

Our solution successfully detected the 28 database operations present in the backend code, including those performed using the aggregate operator.
The database schema was largely inferred correctly and aligned with the predefined schema used for validation, successfully identifying all entity types, attributes, references, and aggregations. While the overall structure was accurately identified, certain data types could not be determined due to JavaScript's dynamic typing, where type information is not always explicitly available in the code. Furthermore, all references and aggregations were inferred with a lower cardinality of 0. Determining lower-bound cardinalities from code is not feasible, as this information is often not present and is rarely enforced explicitly in real-world application logic. As expected, the schema inferred from code analysis closely matched, but did not exactly match, the one obtained through data analysis [11]. While data-driven approaches can detect all data types, infer lower-bound cardinalities, and identify structural variability, their heuristic-based techniques for detecting references do not guarantee full accuracy. In contrast, our code analysis approach was able to detect references precisely, overcoming the limitations of purely heuristic inference. For example, as mentioned earlier, we deliberately renamed the reference from Album to Track as songs and the reference from Album to Genre as categories, in order to expose the limitations of name-based heuristics. These references were correctly detected by our code analysis, but were missed by the data-based approach. Seven database operations were detected as join query candidates. Three of them follow a sequential (nested) pattern, in which the result of one query is used in a subsequent query to simulate a join. The remaining four use MongoDB's aggregate operator to perform joins directly within the database engine. As a result, a total of eight individual queries were involved in the execution of the seven detected joins.
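The two join mechanisms can be sketched as follows. Collection and field names follow the validation schema, but the code itself is an illustrative assumption rather than the generated backend:

```javascript
// Illustrative sketches of the two join mechanisms the analysis detects.
// The db parameter is any object exposing the MongoDB collection API.

// (a) Sequential (nested) pattern: the second query consumes the first result.
async function albumTitlesOfArtist(db, artistName) {
  const artist = await db.collection('artist').findOne({ name: artistName });
  const albums = await db.collection('album')
    .find({ _id: { $in: artist.albums } }).toArray();
  return albums.map(a => a.title);
}

// (b) Aggregation pipeline: the join runs inside the engine via $lookup.
function albumsOfArtistAgg(db, artistName) {
  return db.collection('artist').aggregate([
    { $match: { name: artistName } },
    { $lookup: { from: 'album', localField: 'albums',
                 foreignField: '_id', as: 'albumDocs' } }
  ]).toArray();
}
```

Pattern (a) is what the search-list traversal of Algorithm 3 detects, while pattern (b) is recognized directly from the aggregate operator and its $lookup stage.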
For each detected join, our approach successfully generated a join removal plan, correctly identifying the fields to be duplicated. Table 2 summarizes the join removal plans identified through our code analysis. Each row describes a join query detected in the codebase, detailing the source and target entity types of the join query, the fields proposed for duplication, where the duplication would be applied, and the type of join mechanism originally employed, whether through a sequential (nested) query or MongoDB's aggregation pipeline.

Table 2: Detected join removal plans based on code analysis. (*Query 6 includes two join operations in the same query, resulting in two duplication plans.)

For instance, Query 1 retrieves a specific artist by name and then fetches all their albums using the retrieved artist information. The duplication plan suggests copying the album titles directly into the artist entity as an array to eliminate the need for repeated join operations when this information is required. Concretely, the analysis proposed duplicating the title field from Album into Artist (Query 1), and from Track into Artist (Query 2), allowing each artist document to include the titles of their associated albums and tracks. The name attribute of the Artist entity was also proposed for duplication into Album (Query 3). Additionally, the title of Track was suggested for duplication into Album (Query 4), and the name attribute of Genre into both Album and Track (Queries 5 and 7). Notably, Query 6 involves two join operations: one between Track and Album, and another between Track and Artist. As a result, two duplication plans are proposed: one to copy title and releaseYear from Album into Track, and another to duplicate name from Artist into Track. In such cases, when multiple fields of the same referenced entity are proposed for duplication, they are grouped into an embedded object to preserve the semantic integrity of the original data structure.

# 7.3. Limitations of the validation

The schema used in our validation is not overly complex; however, it was intentionally designed to include all core elements of the U-Schema metamodel, as well as common modeling constructs and practices typically found in real-world schemas. Certain aspects were excluded from this study, such as self-references, while others, like nested queries, were included with limitations (e.g., restricted to two levels of nesting). It is also worth noting that increasing the number of entity types or operations would not necessarily enhance the robustness of the validation. Doing so would mainly replicate the same evaluation logic across more collections, without introducing fundamentally new challenges. Although the number of schema changes required to perform join removal refactorings was relatively small, they were sufficient to demonstrate that the algorithm can accurately identify and apply field duplication when needed. Finally, since the codebase used for validation was automatically generated, it may not fully reflect the diversity of patterns and idioms present in real-world applications. Future work should explore the application of this approach to existing systems with richer development histories and evolving schemas.

# 8. Related Work

Most of the research work conducted to date, as well as the available NoSQL tooling, has addressed the problem of schema inference by analyzing stored data, while the analysis of application code has received comparatively little attention. In this section, we contrast the static code analysis strategy proposed in this paper with the most relevant approaches to schema extraction based on both data and code analysis.
We also discuss the rationale behind defining a new metamodel to represent object-oriented code, despite the existence of other metamodels for this purpose. We begin by reviewing existing proposals, grouped into three categories: data-driven strategies, code analysis approaches, and metamodels. Data-driven Schema Inference. A framework for schema discovery in document stores is presented in [27]. Based on document parsing, the approach infers a schema represented as a tree structure, capturing entities and their structural variations. The framework also includes a simple query language and a visualization mechanism to display all variations of an entity in a simplified format. The approach was validated using several real datasets. Another approach focused on document stores is described in [19], where the authors propose an algorithm to extract structural information from JSON data. Instead of analyzing the entire collection, a representative subset of documents is retrieved from the database. A graph is incrementally built to capture the structural features of these documents. Once the process is complete, the graph represents the union of all identified structural variations, from which a JSON Schema is generated. Schema inference from graph stores is addressed in [8], where a model-driven reverse engineering approach is applied to analyze CREATE statements written in Neo4j's Cypher query language. This analysis enables the extraction of node types, relationships, and their properties, which are then represented in a graph-based metamodel. The resulting model is subsequently transformed into an Extended Entity-Relationship (EER) schema for conceptual modeling purposes. A proposal for schema extraction in columnar stores, specifically HBase, is presented in [13], where JSON Schema is used to represent the inferred schema. The main challenge addressed is the inference of data types from byte arrays, which are the default storage format in HBase.
Their method recursively analyzes the database content and applies a set of inference rules to identify data types. To validate the approach, the authors developed a publicly available tool prototype called HBase Schema Inference (HBaSI). U-Schema was proposed by [11] as a unified metamodel capable of representing logical schemas for both relational and NoSQL databases, including document, key–value, columnar, and graph models. The authors defined canonical forward and reverse mappings between U-Schema and each supported data model. A common strategy was established for implementing and validating the schema extraction process across all five types of databases. For validation, synthetic data were generated to populate the databases, and a four-step round-trip experiment was conducted. The main advantages of the U-Schema-based approach include its independence from any specific data model and its ability to extract schemas that capture both reference and aggregation relationships, as well as structural variations. Moreover, the resulting schema is a model that conforms to an Ecore/EMF metamodel [25], which enables the use of EMF tooling to build database utilities and facilitates interoperability with other tools, as discussed in [22]. As indicated in Section 1, our code analysis approach is complementary to the data-driven strategy proposed in [11]. While analyzing application code allows reference relationships to be directly identified—without relying on heuristics, which may not always be reliable—structural variability is more effectively discovered through data analysis. In contrast, detecting schema variations through code requires access to multiple versions of the application over time. The other advantages discussed above —such as being a generic approach, extracting schemas with references and aggregation, and producing models conforming to Ecore— also apply to our code analysis solution. Code analysis of NoSQL applications. 
In [20], the authors present an approach to support the evolution of a schemaless NoSQL data store by analyzing both the application source code and its version history. The method involves locating database queries within the code and analyzing their arguments and return values in order to infer collections, fields, types, and references between data entities. By applying this analysis across multiple versions of the application, the authors reconstruct a historical database schema that captures all properties that have existed over time, including their types, potential renamings, and the dates of their introduction or removal. This historical schema is visualized in tabular form, using colors and icons to highlight potential data quality issues—such as inconsistencies or deprecated fields—and to warn developers about renamed properties or collections. In contrast, our approach uses metamodel-based representations of code, supports multiple programming languages, introduces novel analysis algorithms, and leverages the extracted intermediate models to suggest and apply database refactorings. Detecting MongoDB access operations is addressed in [5] where the CodeQL language is used to declaratively query JavaScript code. The proposed approach combines structural code analysis with heuristic rules to deal with the dynamic nature of JavaScript. Their technique achieves a precision of 78% in identifying database interactions and focuses on locating access points such as queries, insertions, updates, and deletions across diverse codebases. In contrast, our work focuses on identifying both database access operations and the associated data structures, with the goal of extracting the logical schema using a data model–independent strategy. Rather than relying on CodeQL, we define a custom transformation pipeline based on intermediate metamodels, which enables schema inference and additional tasks such as database refactoring. 
A taxonomy of code smells specific to MongoDB interactions in Java applications is defined in [18], where CodeQL-based static analysis techniques are used to detect the patterns included in the taxonomy. The approach is implemented in a tool capable of identifying common anti-patterns in real-world projects, helping developers detect poor coding practices related to NoSQL database access. Finally, an extension of the Orion engine is presented in [6], aimed at supporting code co-evolution when the database schema changes. Recall that Orion is a generic schema evolution language defined on top of U-Schema, as described in [7]. For each operation in the Orion taxonomy, a model-to-text transformation is implemented. This transformation takes as input an Orion model —obtained by injecting Orion scripts— and generates corresponding CodeQL queries to identify and modify the affected parts of the application code. Depending on the scenario, changes can be applied automatically or developer-facing suggestions can be generated. Although our work focuses on schema inference from code rather than code adaptation, both approaches share the use of U-Schema, code analysis, and MDE techniques. The approach presented in this paper could serve as a front-end for extracting schema information prior to applying Orion operations, particularly in the context of schema-less NoSQL stores. Code Metamodels. Several metamodels have been proposed to represent the structure and behavior of source code in a language-independent manner. These metamodels are widely used in static analysis, reverse engineering, and software modernization tasks, as they provide an abstract representation of code elements such as classes, methods, variables, and control flow. In the following, we briefly describe two of the most relevant metamodels —KDM and MoDisco— which have influenced the design of our Code metamodel and are widely recognized in the context of model-driven reverse engineering.
KDM is a specification defined by the Object Management Group [21]. It is a comprehensive metamodel composed of multiple packages that enable the representation of various aspects of software systems, ranging from source code to physical deployment. KDM is designed to support a wide variety of programming languages, and it can be extended to capture language-specific constructs. While our Code metamodel was inspired by KDM, it is intentionally kept simpler and focused on representing core object-oriented constructs that are sufficient for analyzing applications with intensive database interaction. MoDisco [4] is a model-driven reverse engineering framework developed as an open-source Eclipse project. It was designed to extract information from legacy applications to support their understanding, maintenance, and migration. Although initially focused on Java, MoDisco has been extended to support other languages such as C#, and includes several Discoverers to facilitate the injection of source code into models. Its comprehensive Java metamodel influenced the design of our Code metamodel, particularly in the representation of structural elements commonly found in object-oriented languages.
In this paper, we present a static code analysis strategy to extract logical schemas from NoSQL applications. Our solution is based on a model-driven reverse engineering process composed of a chain of platform-independent model transformations. The extracted schema conforms to the U-Schema unified metamodel, which can represent both NoSQL and relational schemas. To support this process, we define a metamodel capable of representing the core elements of object-oriented languages. Application code is first injected into a code model, from which a control flow model is derived. This, in turn, enables the generation of a model representing both data access operations and the structure of stored data. From these models, the U-Schema logical schema is inferred. Additionally, the extracted information can be used to identify refactoring opportunities. We illustrate this capability through the detection of join-like query patterns and the automated application of field duplication strategies to eliminate expensive joins. All stages of the process are described in detail, and the approach is validated through a round-trip experiment in which an application using a MongoDB store is automatically generated from a predefined schema. The inferred schema is then compared to the original to assess the accuracy of the extraction process.
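The four-stage transformation chain just described (code model, control flow model, data access model, U-Schema logical schema) can be sketched as a pipeline of plain functions. All names and structures below are simplified, hypothetical stand-ins of our own; the actual process operates on Ecore/EMF models and model-to-model transformations.

```python
# A minimal sketch (ours) of the model-transformation chain: each stage
# consumes the previous stage's model and produces the next one.

def inject_code_model(source_files):
    """Stage 1: inject application code into a code model (here: a dict)."""
    return {"classes": [f["class"] for f in source_files]}

def derive_control_flow(code_model):
    """Stage 2: derive a control flow model from the code model."""
    return {"paths": [[c, "db_call"] for c in code_model["classes"]]}

def derive_access_model(cfg_model):
    """Stage 3: collect data access operations along control flow paths."""
    return {"operations": [{"entity": p[0], "op": "find"} for p in cfg_model["paths"]]}

def infer_logical_schema(access_model):
    """Stage 4: infer the logical schema (here: just the entity names)."""
    return {"entities": sorted({o["entity"] for o in access_model["operations"]})}

schema = infer_logical_schema(
    derive_access_model(derive_control_flow(
        inject_code_model([{"class": "User"}, {"class": "Order"}]))))
print(schema)  # {'entities': ['Order', 'User']}
```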
# 1. Introduction In statistical learning theory, the probably approximately correct (PAC) framework (Valiant, 1984) is central to understanding binary classification learnability. A key result shows that PAC learnability is fully determined by the VC dimension (Vapnik and Chervonenkis, 1974; Blumer et al., 1989), elegantly linking learnability and sample complexity. Similar characterizations exist for widely diverse variants of statistical and online learning (e.g., Bartlett et al., 1994; Littlestone, 1988). The appeal of combinatorial characterizations is often in their simplicity, reducing learnability to a single parameter, and offering useful insights into algorithm design and problem structure. In contrast, a similarly tight characterization of bandit learning, and in particular of the problem known as best-arm identification, is still lacking. In this setting, there is a set of actions (or arms) $\mathcal { A }$ , and an unknown reward function $f ^ { * } : { \mathcal { A } } \to [ 0 , 1 ]$ . A learner repeatedly queries actions $a \in { \mathcal { A } }$ and observes their corresponding reward, a random variable with mean $f ^ { * } ( a )$ . The aim of a learning algorithm is to identify a near-optimal action using as few queries as possible. Analogous to classic learning settings, one may assume the rewards are realizable by a known, but arbitrary, class of reward functions ${ \mathcal { F } } \subseteq [ 0 , 1 ] ^ { \mathcal { A } }$ . A key focus of study in this context is the optimal query complexity associated with a given class. The pursuit of VC-dimension-like parameters for bandit learning has drawn considerable attention (Amin et al., 2011; Russo and Van Roy, 2013; Foster et al., 2021; Brukhim et al., 2023; Foster et al., 2023; Hanneke and Wang, 2024). However, existing parameters are often non-combinatorial in nature, rather complex, and in the general case exhibit substantial gaps between upper and lower bounds (see Section 1.1 for further discussion).
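As a concrete toy illustration (ours, not from the paper) of why the structure of the class matters: for noise-free threshold rewards on $K$ arms, a learner that exploits the structure finds an optimal action with about $\log_2 K$ queries by binary search, whereas an unstructured class of $K$ functions can force on the order of $K$ queries.

```python
# Toy structured class (our own): threshold functions f_i(a) = 1 if a >= i,
# else 0, on actions A = {0,...,K-1}. Noise-free binary search locates the
# smallest action with reward 1, which is optimal, in O(log K) queries.
K = 1024

def make_f(i):
    return lambda a: 1.0 if a >= i else 0.0

def find_optimal(f):
    lo, hi, queries = 0, K - 1, 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if f(mid) == 1.0:
            hi = mid          # an optimal action is at mid or to its left
        else:
            lo = mid + 1      # all actions up to mid have reward 0
    return lo, queries

arm, q = find_optimal(make_f(700))
print(arm, q)  # 700 10
```

With $K = 1024$ the search uses exactly $\log_2 K = 10$ queries, illustrating how observations can reveal information about unqueried arms when $\mathcal{F}$ is structured.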
We start our investigation by asking whether there exists a combinatorial characterization of bandit learning. Somewhat disappointingly, we prove that no such characterization of bandit learnability exists. Specifically, we use the definition of a combinatorial dimension introduced by Ben-David et al. (2019) that encompasses all standard notions of dimension in both statistical and online learning. Using a rather simple argument, we demonstrate that no such dimension can universally characterize bandit learnability, even for finite classes. We then shift our focus to exploring algorithmic approaches to the problem. Specifically, we examine reward function classes with small optimal query complexity and seek a general algorithmic principle that achieves it, guided by the question: When is a class $\mathcal { F }$ of a bounded query complexity, efficiently bandit-learnable? There are several algorithmic oracle assumptions commonly considered in the context of computational efficiency. For example, in statistical learning theory, the gold standard is the simple empirical risk minimization (ERM) principle which determines that it suffices to find any function in the class that is consistent with past observations. In interactive settings, an estimation algorithm is often used both to find a consistent function and to produce future predictions (see, e.g., Foster et al., 2023; Brukhim et al., 2023). One might assume that a class which admits efficient algorithms as above might also be efficiently bandit-learnable. Interestingly, we prove a hardness result showing that this is not the case. Specifically, we construct a reward function class for which at most two queries are needed to find the optimal action, yet no algorithm can do so in polynomial time, unless $\mathrm { R P } = \mathrm { N P }$ . Moreover, we prove that this class admits efficient algorithms for the aforementioned tasks, demonstrating that the hardness is inherent to the bandit setting.
An important aspect of bandit learnability is the noise model being considered. In the absence of noise, learning is less constrained and is therefore simpler. However, it is also more brittle, as it relies heavily on the precise function values that define the structure of the class $\mathcal { F }$ . In contrast, under sufficiently noisy conditions, bandit learnability exhibits a form of robustness, allowing it to be characterized by simple parameters and algorithms, as shown in recent work by Hanneke and Wang (2024). However, while some works (Hanneke and Yang, 2023; Amin et al., 2011) focused on the noise-free regime, Hanneke and Wang (2024) considered a highly complex family of distributions of arbitrary noise, leaving intermediate noise regimes largely unaddressed (see further discussion in Section 1.1). In this work, we partially address this gap and examine the effect of noise on the query complexity of bandit learning. We focus on a Gaussian noise model and study the relationship between the noise variance and the query complexity. For instance, we show that certain function classes have a query complexity of 1 when $\sigma = 0$ but become unlearnable (i.e., infinite query complexity) when $\sigma = 1$ . Moreover, we identify an upper bound $\bar { \sigma }$ on $\sigma$ such that, for any function class, the query complexity for any $\sigma \leqslant \bar { \sigma }$ is upper bounded by the query complexity for $\sigma = 0$ . This observation implies that the query complexity in the low-noise regime can be captured by the noise-free setting. Additionally, we prove that for a specific family of function classes, there exist class-dependent thresholds for $\sigma$ , that separate distinct learning regimes. Above a certain noise level, the query complexity is governed by a simple parameter $\gamma$ known as the generalized maximin volume, introduced by Hanneke and Wang (2024). 
Below a different threshold the query complexity is 1, exhibiting a large gap from $\gamma$ . Understanding the broader interplay between noise variance and query complexity across arbitrary function classes remains an open and interesting direction for future research. Finally, we examine an alternative notion of learnability in bandits via the lens of regret minimization and study its relationship with query complexity of best-arm identification. Specifically, we prove that any algorithm which achieves the optimal query complexity $d$ , must also incur regret that is linear in $d$ , and is not regret-optimal for time horizon $T = O ( d )$ . This result establishes that no single algorithm can simultaneously achieve both optimal query complexity and optimal regret. # 1.1. Related work The PAC framework and related combinatorial characterizations have played a crucial role in providing quantitative insights into learnability across statistical learning theory. However, bandit learning, particularly best-arm identification (BAI), lacks a unifying framework and remains largely a collection of case-specific analyses (see, e.g., Bubeck et al., 2012, and references within). Moreover, most prior BAI work (e.g., Garivier and Kaufmann, 2016; Kaufmann et al., 2016) assumes that the mean rewards lie in some fixed bounded product space, e.g., $\mathcal { F } = [ 0 , 1 ] ^ { K }$ , and so pulling one of the $K$ arms provides no information about others. In contrast, the focus of this work is the setting in which observations can possibly reveal additional information, based on the structure of the class $\mathcal { F } \subsetneq [ 0 , 1 ] ^ { K }$ . Indeed, the approach of studying the structure of the class itself has gained attention in recent years (Foster et al., 2021, 2023; Hanneke and Yang, 2023; Hanneke and Wang, 2024).
A notable proposed parameter for capturing interactive decision making is the decision-estimation coefficient (DEC) (Foster et al., 2021, 2023). However, it suffers from arbitrarily large gaps between upper and lower bounds (Foster et al., 2023) and fails to characterize learnability in stochastic bandits (see Hanneke and Wang, 2024). More recently, Hanneke and Wang (2024) introduced a characterization for stochastic bandits with arbitrary noise, but it exhibits an exponential gap between upper and lower bounds and does not seamlessly extend to standard noise models, e.g., Gaussian noise. In Section 5, we further analyze their generalized maximin volume parameter, showing that under moderate-variance Gaussian noise, it can diverge arbitrarily from the optimal query complexity. Finally, we establish that no combinatorial dimension fully characterizes bandit learnability. While Hanneke and Yang (2023) demonstrated a related result using complex set-theoretic arguments, their proof relies on the cardinality of the continuum and does not directly address combinatorial dimensions. In contrast, we provide a rather simple, direct argument showing that no such dimension exists, within the standard model of set theory, without any additional assumptions. # 2. Query complexity of bandit learning In this work, we study query complexity of bandit learning. Specifically, we focus on the following problem. Let $\mathcal { A }$ be an action set, $\mathcal { F }$ a set of reward functions $f : \mathcal { A } \to [ 0 , 1 ]$ , and $f ^ { * } \in { \mathcal { F } }$ the target reward function. In each round $t = 1 , \dots , T$ , the learner queries an action $a _ { t } \in { \mathcal { A } }$ and receives reward $r _ { t } \in [ 0 , 1 ]$ with $\mathbb { E } [ r _ { t } | a _ { t } ] = f ^ { * } ( a _ { t } )$ . The goal is best-arm identification: for a given $\epsilon \in [ 0 , 1 ]$ , using as few queries as possible, identify an $\epsilon$ -optimal action.
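A simple baseline for this problem (our own sketch, assuming Gaussian observation noise and ignoring any structure in $\mathcal{F}$) samples every arm often enough that all empirical means are accurate, using the standard sub-Gaussian tail bound $m \geqslant 2\sigma^2 \ln(2/\delta) / \epsilon^2$ together with a union bound; its query count necessarily scales with $|\mathcal{A}|$.

```python
import math
import random

# Our own illustration, not an algorithm from the paper: uniform sampling
# with a sub-Gaussian confidence bound. Each arm's mean is estimated to
# accuracy eps with failure probability delta using
#   m >= 2 * sigma^2 * ln(2 / delta) / eps^2   samples.
def samples_needed(sigma, eps, delta):
    return math.ceil(2 * sigma ** 2 * math.log(2 / delta) / eps ** 2)

def identify(f_star, sigma, eps, delta, rng):
    """Return an empirically best arm and the total number of queries."""
    K = len(f_star)
    m = samples_needed(sigma, eps / 2, delta / K)  # union bound over arms
    means = [sum(f_star[a] + rng.gauss(0, sigma) for _ in range(m)) / m
             for a in range(K)]
    return max(range(K), key=lambda a: means[a]), K * m

arm, queries = identify([0.2, 0.9, 0.4], sigma=0.5, eps=0.2, delta=0.1,
                        rng=random.Random(1))
print(arm, queries)  # arm 1 is optimal; the total query count scales with K
```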
We consider both the noise-free setting, where $r _ { t } = f ^ { * } ( a _ { t } )$ , and the noisy setting, where in each round $t = 1 , . . . , T$ , the learner observes $r _ { t } = f ^ { * } ( a _ { t } ) + \xi$ for some zero-mean random variable $\xi$ . Throughout the paper, unless stated otherwise, we will assume a Gaussian noise model, i.e., $\xi \sim \mathcal { N } ( 0 , \sigma ^ { 2 } )$ . We say that a class of reward functions ${ \mathcal { F } } \subseteq [ 0 , 1 ] ^ { \mathcal { A } }$ is bandit-learnable if there is a (possibly randomized) algorithm Alg and a function $m : ( 0 , 1 ) ^ { 2 } \to \mathbb { N }$ such that for any $f \in { \mathcal { F } }$ , when given any $\epsilon , \delta > 0$ and after having made at most $m ( \epsilon , \delta )$ queries $a _ { t }$ to $f$ and observed $r _ { t }$ (under the appropriate noise model), algorithm Alg outputs $\hat { a }$ such that with probability at least $1 - \delta$ , $$ f ( \hat { a } ) \geqslant \operatorname* { s u p } _ { a \in \mathcal { A } } f ( a ) - \epsilon . $$ The function $m ( \cdot , \cdot )$ is the query complexity of Alg. We often denote $m _ { \mathrm { A l g } } ^ { \sigma } ( \cdot , \cdot )$ when considering noisy feedback, for the appropriate choice of $\sigma$ . We then define the query complexity of a given class $\mathcal { F }$ , for any fixed choice of parameters, as follows. Definition 1 Given $\epsilon , \delta \in [ 0 , 1 ]$ , the $( \epsilon , \delta )$ -query complexity for class ${ \mathcal { F } } \subseteq [ 0 , 1 ] ^ { \mathcal { A } }$ under a Gaussian noise model with $\xi \sim \mathcal { N } ( 0 , \sigma ^ { 2 } )$ , denoted $\mathrm { Q C } _ { \epsilon , \delta } ^ { \sigma } ( \mathcal { F } )$ , is the minimum over all $m _ { \mathrm { A l g } } ^ { \sigma } ( \epsilon , \delta )$ , where $m _ { \mathrm { A l g } }$ is the query complexity of a bandit learning algorithm Alg for the class $\mathcal { F }$ . # 3.
No combinatorial dimension can characterize bandit learnability A fundamental result of statistical learning theory is the characterization of PAC learnability in terms of the VC dimension of a class. Similar combinatorial characterizations exist for diverse variants of statistical learning (Vapnik, 1989; Natarajan and Tadepalli, 1988; Bartlett et al., 1994; Ben-David et al., 1992; Brukhim et al., 2022) as well as online learning (Littlestone, 1988; Ben-David et al., 2009; Rakhlin et al., 2015; Daniely et al., 2015). All standard notions of dimension in the aforementioned learning settings can be abstracted as a function $\mathfrak { D }$ that maps a class of functions $\mathcal { F } \subseteq \mathcal { V } ^ { \mathcal { X } }$ to $\mathbb { N } \cup \left\{ \infty \right\}$ , while satisfying the following requirements: (1) learnability characterization: a class $\mathcal { F }$ is learnable if and only if ${ \mathfrak { D } } ( { \mathcal { F } } ) < \infty$ , and (2) finite character: for every integer $d$ and $\mathcal { F }$ , the statement “${ \mathfrak { D } } ( { \mathcal { F } } ) \geqslant d$” can be demonstrated by a finite set of domain points and a finite collection of members of $\mathcal { F }$ . We will next give a more formal definition of the finite character property. First, we define the notion of a shattered set. In the definition and throughout this section, we write ${ \mathcal { F } } | _ { X }$ to denote the set of all functions in $\mathcal { F }$ restricted to points in $X$ . Definition 2 (Shattered sets) For every $d \in \mathbb { N }$ let $V _ { d } : \mathcal { X } ^ { d } \times 2 ^ { \mathcal { Y } ^ { d } } \mapsto \left\{ Y E S , N O \right\}$ be a shattering function.
A set $X \in \mathcal { X } ^ { d }$ is shattered by hypothesis class $\mathcal { F }$ , with respect to $V _ { d }$ , if and only if ${ \mathcal { F } } | _ { X }$ is of finite cardinality and $V _ { d } ( X , { \mathcal { F } } | _ { X } ) = Y E S$ . Definition 3 (Finite character property) We say that a dimension $\mathfrak { D }$ satisfies the finite character property if for every $d \in \mathbb { N }$ there exists a shattering function $V _ { d }$ such that ${ \mathfrak { D } } ( { \mathcal { F } } ) \geqslant d$ if and only if there exists a shattered set of size at least $d$ . This property was first defined by Ben-David et al. (2019), who gave a formal definition of the notion of “combinatorial dimension” or “complexity measure”, satisfied by all previously proposed dimensions in statistical learning theory. The intuition is that a finite character property can be checked by probing finitely many elements of $\mathcal { X }$ and $\mathcal { F }$ . For example, the classic VC dimension (Vapnik and Chervonenkis, 1974; Vapnik, 1989) satisfies the finite character property since the statement “$\mathrm { V C } ( \mathcal { F } ) \geqslant d$” can be verified with a finite set of points $X = \{ x _ { 1 } , \ldots , x _ { d } \} \subseteq \mathcal { X }$ and a finite set of classifiers $h _ { 1 } , \ldots , h _ { 2 ^ { d } } \in { \mathcal { F } } | _ { X }$ that shatter $X$ . In a similar manner to the statistical setting, a dimension capturing bandit learnability can be abstracted as a function $\mathfrak { D }$ that maps a class of reward functions $\mathcal { F } \subseteq \mathcal { V } ^ { \mathcal { X } }$ to $\mathbb { N } \cup \left\{ \infty \right\}$ . We say the dimension $\mathfrak { D }$ satisfies the finite character property if Definition 3 holds.
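For a finite binary class, the VC shattering function $V_d$ can be written out directly (our sketch; the class and points are illustrative): $V_d(X, \mathcal{F}|_X) = \mathrm{YES}$ exactly when all $2^d$ labelings of $X$ appear in $\mathcal{F}|_X$.

```python
from itertools import product

# The VC shattering function V_d from the text, for finite binary classes:
# V_d(X, F|_X) = YES iff F restricted to X realizes all 2^d labelings.
def restrict(F, X):
    return {tuple(f[x] for x in X) for f in F}

def vc_shatters(F, X):
    return restrict(F, X) == set(product([0, 1], repeat=len(X)))

# Toy class: all monotone labelings of the two-point domain {0, 1}.
F = [{0: 0, 1: 0}, {0: 0, 1: 1}, {0: 1, 1: 1}]
print(vc_shatters(F, (0,)))    # True: both labels appear on the single point
print(vc_shatters(F, (0, 1)))  # False: the labeling (1, 0) is never realized
```

This makes the finite character property tangible: the YES answer for a size-$d$ set is witnessed by finitely many points and finitely many restricted functions.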
We say the dimension characterizes bandit learnability if for every integer $d$ and $\epsilon , \delta > 0$ , there exist integers $m , M$ so that for every $\mathcal { F }$ the following holds: (1) if ${ \mathfrak { D } } ( { \mathcal { F } } ) \geqslant d$ , then $\mathrm { Q C } _ { \epsilon , \delta } ^ { 0 }$ is at least $m$ , and (2) if $\mathfrak { D } ( \mathcal { F } ) < d$ , then $\mathrm { Q C } _ { \epsilon , \delta } ^ { 0 }$ is at most $M$ . The integers $m , M$ tend to $\infty$ as $d$ tends to $\infty$ . Perhaps the most well-known example of a combinatorial dimension in the context of bandit learning is the eluder dimension (Russo and Van Roy, 2013). Over more than a decade, it has been a central technique in the context of bandits as well as reinforcement learning (RL) (Li et al., 2022; Wang et al., 2020; Jin et al., 2021). It can be easily verified that the eluder dimension does satisfy the finite character property. However, it is also known that there are arbitrarily large gaps between bounds obtained via the eluder dimension and related combinatorial measures (Brukhim et al., 2023). The following theorem shows that there is no non-trivial dimension that satisfies the finite-character property and also characterizes bandit learnability. Our result holds regardless of the assumed cardinality of the continuum, and within standard ZFC set theory. Our findings complement the celebrated result of Hanneke and Yang (2023), which demonstrates a particular reward function class for which bandit learnability (or EMX learnability; Ben-David et al., 2019) depends on the cardinality of the continuum and is therefore independent of the standard ZFC set theory axioms. Our result implies that even when we restrict our attention to classes for which bandit learnability is provable within ZFC, there cannot exist a dimension with the finite-character property characterizing bandit learnability.
Theorem 4 (No finite-character dimension for bandits) Let $\mathcal { X } , \mathcal { V }$ be arbitrary (possibly infinite) sets, of size $| \mathcal { V } | \geqslant | \mathcal { X } | \geqslant d + 1$ for some integer $d > 2$ . Let $\mathfrak { D }$ be a dimension for bandit classes in $\mathcal { V } ^ { \mathcal { X } }$ that satisfies the finite character property, and such that $\exists { \mathcal { F } }$ with $\mathfrak { D } ( \mathcal { F } ) \geqslant d$ . Then, for any $\epsilon , \delta \geqslant 0$ , there exists $\mathcal { F } \subseteq \mathcal { V } ^ { \mathcal { X } }$ for which ${ \mathfrak { D } } ( { \mathcal { F } } ) \geqslant d$ , but the query complexity of bandit-learning $\mathcal { F }$ is bounded by $\mathrm { Q C } _ { \epsilon , \delta } ^ { 0 } ( \mathcal { F } ) \leqslant 2$ . In particular, $\mathfrak { D }$ does not characterize bandit learnability. Proof Consider a class $\mathcal { F } \subseteq \mathcal { V } ^ { \mathcal { X } }$ such that ${ \mathfrak { D } } ( { \mathcal { F } } ) \geqslant d > 2$ . By the finite-character assumption, and since ${ \mathfrak { D } } ( { \mathcal { F } } ) \geqslant d$ , there exists a shattering function $V _ { d }$ , a set $X = \{ x _ { 1 } , . . . , x _ { d } \}$ and a set of vectors $F = \{ v _ { 1 } , . . . , v _ { n } \} \in { \mathcal { F } } | _ { X }$ for some integer $n$ , that is, $F \in ( \mathcal { V } ^ { d } ) ^ { n }$ , such that $V _ { d } ( X , F ) = \mathrm { Y E S }$ . Since $\vert { \mathcal { X } } \vert > d$ , there must exist a point $x _ { 0 } \in \mathcal { X }$ such that $x _ { 0 } \notin X$ . We define a new class $\mathcal { F } ^ { \prime }$ over the domain $\mathcal { X }$ as follows.
For any $f \in { \mathcal { F } }$ , we define $f ^ { \prime } \in \mathcal { F } ^ { \prime }$ such that: $$ f ^ { \prime } ( x ) = \begin{cases} f ( x ) & \text{if } x \neq x _ { 0 } , \\ \arg \max _ { x \in \mathcal { X } } f ( x ) & \text{if } x = x _ { 0 } . \end{cases} $$ Then, the new class $\mathcal { F } ^ { \prime } \subseteq \mathcal { V } ^ { \mathcal { X } }$ consists of all functions $f ^ { \prime }$ of the above form. We have that $\vert \mathcal { F } ^ { \prime } \vert \leqslant \vert \mathcal { F } \vert$ . We now want to show the following two properties hold: (1) the query complexity of $\mathcal { F } ^ { \prime }$ is at most 2, and (2) $\mathfrak { D } ( \mathcal { F } ^ { \prime } ) \geqslant d$ . It suffices to show (1) and (2) to complete the proof. First, to show (1), notice that any algorithm can first query $x _ { 0 }$ and obtain the value $x _ { 1 } : = \arg \max _ { x \in \mathcal { X } } f ( x )$ . Then, querying $x _ { 1 }$ either immediately attains the optimal value of $f ^ { \prime }$ , or it may hold that $f ( x _ { 1 } ) \leqslant x _ { 1 }$ in which case $x _ { 0 }$ is the optimal value, since $f ^ { \prime } ( x _ { 0 } ) = x _ { 1 }$ . Thus, at most 2 queries are needed to determine the optimal value of any $f ^ { \prime } \in \mathcal { F } ^ { \prime }$ up to any $\epsilon \geqslant 0$ . Next, to show (2), simply observe that the set $F$ above is contained in $\mathcal { F } ^ { \prime } | _ { X }$ . Then, since $V _ { d } ( X , F ) = V _ { d } ( X , \mathcal { F } ^ { \prime } | _ { X } ) = \mathrm { Y E S }$ we get that $X$ is also shattered by hypothesis class $\mathcal { F } ^ { \prime }$ with respect to $V _ { d }$ and so $\mathfrak { D } ( \mathcal { F } ^ { \prime } ) \geqslant d$ .
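The construction in the proof can be mimicked on a toy class (our own choice of domain and values): the reward at the fresh point $x_0$ encodes the argmax of $f$, so two queries always suffice.

```python
# Toy instance (ours) of the proof's pointer construction. The sketch only
# shows the two-query mechanism; it omits the proof's case analysis for
# when the pointed-to value is itself small.
X = [1, 2, 3, 4]   # original domain points
x0 = 0             # fresh point not in X

def augment(f):
    """Build f' as in the proof: f'(x0) = argmax_x f(x), f' = f elsewhere."""
    best = max(f, key=f.get)
    return {x0: best, **f}

def two_query_learner(f_prime):
    """Query x0 to read off the argmax, then query that action."""
    pointer = f_prime[x0]     # first query: reveals the optimal action
    value = f_prime[pointer]  # second query: its reward
    return pointer, value

f = {1: 0.2, 2: 0.9, 3: 0.4, 4: 0.1}
print(two_query_learner(augment(f)))  # (2, 0.9)
```

Note how the augmentation leaves all functions unchanged on $X$, which is why the shattering witness, and hence the dimension lower bound, is preserved.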
Remark 5 (“Reverse” finite character property) The property in Definition 3 requires that a lower bound on the dimension be demonstrated by finitely many domain points $X$ and members of ${ \mathcal { F } } | _ { X }$ . Indeed, as observed by Ben-David et al. (2019), all standard notions of dimension in statistical and online learning satisfy this property. One may also consider an alternative property which requires that an upper bound on the dimension be demonstrated by finitely many domain points $X$ and members of ${ \mathcal { F } } | _ { X }$ . However, one can easily show that there cannot exist a dimension satisfying both this property and characterizing bandit learnability, for any infinite class. # 4. Hardness of bandit learning In this section, we study the computational efficiency of bandit learning in comparison to standard (albeit possibly computationally hard) algorithmic operations often considered in learning theory. A fundamental example is empirical risk minimization (ERM), which can be used to find a hypothesis consistent with the observed data (as is sufficient, for instance, for PAC learnability). In interactive learning settings, estimation algorithms are often used both to select consistent hypotheses and to make predictions (see, e.g., Foster et al., 2023; Brukhim et al., 2023). Given these, one might naturally expect that if a function class supports efficient algorithms for such tasks, it should also be efficiently learnable in the bandit setting. Quite surprisingly, we prove that this intuition fails. We construct a class of reward functions where the optimal action can be identified with just two queries, yet no polynomial-time algorithm can achieve this, unless $\mathrm { R P } \ = \ \mathrm { N P }$ . Furthermore, we show that this class does admit efficient algorithms for standard learning tasks, highlighting that in this case the computational hardness arises solely from the nature of the bandit-learning task.
Commonly used algorithmic procedures Below we give three definitions of the relevant algorithmic procedures we will consider in the main theorem presented in this section, Theorem 9. Specifically, we formally define a consistency (ERM) algorithm, an online estimation algorithm, and a maximization algorithm, as follows. Definition 6 (Consistency (ERM) algorithm) An algorithm Alg is a consistency (ERM) algorithm for a class ${ \mathcal { F } } \subseteq [ 0 , 1 ] ^ { \mathcal { A } }$ if for every $f \in { \mathcal { F } }$ and for every set $S = \{ ( a _ { 1 } , f ( a _ { 1 } ) ) , \dots , ( a _ { m } , f ( a _ { m } ) ) \}$ , where each $a _ { i } \in { \mathcal { A } }$ , when given $S$ as input, Alg returns ${ \hat { f } } \in { \mathcal { F } }$ such that for all $i = 1 , \ldots , m$ it holds that $f ( a _ { i } ) = { \hat { f } } ( a _ { i } )$ . Definition 7 (Online estimation algorithm) An online estimation algorithm for ${ \mathcal { F } } \subseteq [ 0 , 1 ] ^ { \mathcal { A } }$ is an algorithm that, at each round $t = 1 , \dots , T$ , when given a sequence of past observations $( a _ { 1 } , f ( a _ { 1 } ) ) , \ldots , ( a _ { t - 1 } , f ( a _ { t - 1 } ) )$ for some $f \in { \mathcal { F } }$ , returns an estimator $\hat { f } _ { t } \in \mathcal { F }$ . The algorithm has decaying estimation error if there exists $\mathrm { E S T } ( T ) \geqslant 0$ growing sublinearly in $T$ , that is, $\mathrm { E S T } ( T ) = o ( T )$ , such that for any sequence $a _ { 1 } , \dotsc , a _ { T } \in { \mathcal { A } }$ , we have $$ \sum _ { t = 1 } ^ { T } \left( \hat { f } _ { t } ( a _ { t } ) - f ( a _ { t } ) \right) ^ { 2 } \leqslant \mathrm { E S T } ( T ) . $$ Definition 8 (Maximizing algorithm) An algorithm Alg for a class ${ \mathcal { F } } \subseteq [ 0 , 1 ] ^ { \mathcal { A } }$ is a maximizing algorithm if for every $f \in { \mathcal { F } }$ and every $\epsilon > 0$ , it returns $\hat { a } \in \mathcal A$ such that $$ f ( \hat { a } ) \geqslant \sup _ { a \in \mathcal { A } } f ( a ) - \epsilon . $$ A maximizing algorithm for a class $\mathcal { F }$ over a finite action set $\mathcal { A }$ is said to be efficient if each function $f \in { \mathcal { F } }$ has a concise representation using $O ( \mathrm { p o l y l o g } ( | \mathcal { A } | ) )$ bits, and the algorithm has running time that is polynomial in the size of the input, i.e., $\mathrm { p o l y } ( \log ( | \mathcal { A } | ) , 1 / \epsilon )$ . Hardness of bandit learning Recall that the complexity class RP (randomized polynomial time; Gill III, 1974; Valiant and Vazirani, 1985) is the class of decision problems solvable in polynomial time by a probabilistic Turing machine such that: if the answer is “yes”, at least $1 / 2$ of computation paths accept; if the answer is “no”, all computation paths reject. The following theorem demonstrates a reduction from the NP-complete problem of Boolean satisfiability to bandit learning, using a construction of a function class which at the same time allows efficient algorithms for standard learning tasks. This establishes hardness of bandit learning, under the assumption that $\mathrm { R P } \neq \mathrm { N P }$ . Theorem 9 (Hardness of bandit learning) For every $n \in \mathbb { N }$ , there exists a finite function class ${ \mathcal { F } } _ { n } \subseteq [ 0 , 1 ] ^ { \mathcal { A } _ { n } }$ over action set $\mathcal { A } _ { n }$ of size $2 ^ { n + 1 } + 1$ , such that for every $\epsilon , \delta \geqslant 0$ , $$ \mathrm { Q C } _ { \epsilon , \delta } ^ { 0 } ( \mathcal { F } _ { n } ) \leqslant 2 , $$ and such that the following holds.
If there exists a bandit learning algorithm for every ${ \mathcal { F } } _ { n }$ with running time that is polynomial in $n$ , then $\mathrm { R P = N P }$ . Moreover, each class ${ \mathcal { F } } _ { n }$ admits efficient deterministic algorithms as follows: • The class ${ \mathcal { F } } _ { n }$ admits a consistency (ERM) algorithm, of runtime $O ( n ^ { 2 } )$ . • The class ${ \mathcal { F } } _ { n }$ admits an online estimation algorithm, of runtime $O ( n ^ { 2 } )$ and $\mathrm { E S T } ( T ) = O ( 1 )$ . • The class ${ \mathcal { F } } _ { n }$ admits a maximizing algorithm, of runtime ${ \tilde { O } } ( n ^ { 2 } )$ , for every $\epsilon \geqslant 0$ . Remark 10 We remark that although Theorem 9 is stated in the noise-free setting, a similar result can also be proved in the noisy setting. First, it can be shown that a Gaussian noise model with sufficiently low variance $\sigma \approx 1 / 2 ^ { n }$ is not qualitatively different from the noise-free case. In particular, for such small values of $\sigma$ , we obtain $\mathrm { Q C } _ { \epsilon , \delta } ^ { \sigma } ( \mathcal F _ { n } ) \leqslant 2$ as well as all other statements from Theorem 9, where the guarantees for efficient algorithms now hold with high probability. More generally, our construction exhibits a trade-off between the optimal query complexity and the variance of the noise model, such that a large-variance noise model can be incorporated while increasing the optimal query complexity. In particular, Theorem 9 could be extended to the noisy setting under Gaussian noise with large, constant variance (e.g., $\sigma = 1$ ), but with query complexity of order $\mathrm { Q C } _ { \epsilon , \delta } ^ { \sigma } = \tilde { O } ( n ^ { 2 } )$ , for every $\epsilon , \delta$ . Thus, although the optimal QC is polynomial in $n$ , a similar construction as shown below demonstrates that there is no bandit learning algorithm that runs in polynomial time, unless $\mathrm { R P = N P }$ .
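For intuition, the three oracles of Definitions 6, 7, and 8 can be realized by brute force over a tiny finite class (our sketch; the classes $\mathcal{F}_n$ of Theorem 9 admit far more efficient implementations, and the theorem's point is precisely that such oracles do not yield efficient bandit learning).

```python
# Brute-force reference implementations (ours) of the three oracles for a
# small finite class of reward functions over actions {"a", "b", "c"}.
F = [
    {"a": 0.1, "b": 0.9, "c": 0.3},
    {"a": 0.1, "b": 0.2, "c": 0.8},
]

def erm(S):
    """Consistency oracle (Def. 6): any f in F agreeing with every sample."""
    return next(f for f in F if all(f[a] == y for a, y in S))

def online_estimate(history):
    """Online estimation (Def. 7): reuse the consistency oracle on the past."""
    return erm(history)

def maximize(f, eps=0.0):
    """Maximization oracle (Def. 8): an eps-optimal action for a known f."""
    return max(f, key=f.get)

f_hat = erm([("a", 0.1), ("b", 0.9)])
print(maximize(f_hat))  # 'b'
```

The maximization oracle takes $f$ as explicit input; the bandit learner, by contrast, only sees rewards of the actions it queries, which is where the hardness enters.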
See Section 5 for related results and discussion.

Proof [Proof of Theorem 9] Throughout the proof, we fix $n \in \mathbb{N}$ and simply denote $\mathcal{A}_n, \mathcal{F}_n$ by $\mathcal{A}, \mathcal{F}$, for brevity. We start by defining $\mathcal{A}$ as follows: $\mathcal{A} = \{\star\} \cup \mathcal{A}^{(2)} \cup \mathcal{A}^{(3)}$, where $\mathcal{A}^{(2)} = \{0,1\}^n$ and $\mathcal{A}^{(3)} = [2^n]$. We will construct a class $\mathcal{F} \subseteq [0,1]^{\mathcal{A}}$ which is best thought of as represented by a query tree of the following structure: $\star$ corresponds to the root node, and actions in $\mathcal{A}^{(2)}$ and $\mathcal{A}^{(3)}$ correspond to nodes of the second and third layer of the tree, respectively. Before defining the class $\mathcal{F}$, we consider the following set: $$ \Phi = \{\text{all 3CNF formulas } \phi \text{ on } n \text{ variables with at most } n^2 \text{ clauses}\}. $$ For every $\phi \in \Phi$ that is satisfiable, we denote by $a^*_\phi$ the satisfying assignment for $\phi$ that is minimal according to the natural ordering on $\mathcal{A}^{(2)}$. Define $\mathcal{F} \subseteq [0,1]^{\mathcal{A}}$ as follows: $\mathcal{F} = \mathcal{F}^{sat} \cup \mathcal{F}^{all}$, where $$ \mathcal{F}^{all} = \{f_\phi : \phi \in \Phi\}, \qquad \text{and} \qquad \mathcal{F}^{sat} = \{f_{\phi,c} : \phi \in \Phi \text{ s.t. } \phi \text{ is satisfiable},\ c \in \mathcal{A}^{(3)}\}, $$ and where the functions of the form $f_{\phi,c}$ are defined as follows: $$ f_{\phi,c}(a) = \begin{cases} \mathrm{encode}(\phi) & \text{if } a = \star \\ \frac{1}{2^{n+1}} \cdot c & \text{if } a = a^*_\phi \in \mathcal{A}^{(2)} \\ 1 & \text{if } a = c \in \mathcal{A}^{(3)} \\ 0 & \text{otherwise}, \end{cases} $$ and $f_\phi$ is defined analogously with only the first case nonzero (i.e., $f_\phi(\star) = \mathrm{encode}(\phi)$ and $f_\phi(a) = 0$ otherwise), where $\mathrm{encode}(\phi)$ encodes the formula by some value in $\bigl[\frac{1}{4}, \frac{1}{2}\bigr]$. For example, $\mathrm{encode}(\cdot)$ can be implemented as follows. Each literal can first be encoded using $\log(n) + 1$ bits (the variable index plus one bit for negation). The full formula requires $O(n^2 \log(n))$ bits, and this binary string can be embedded in $[0, 1/5)$ by writing it after the (binary) decimal point. Lastly, this value can be shifted by $1/4$ so that it lies in the desired range $\bigl[\frac{1}{4}, \frac{1}{2}\bigr]$. This encoding can easily be decoded by any learner when no noise is added to the encoded value $f_{\phi,c}(\star)$.

Query complexity at most 2: Let us argue that the query complexity of this class $\mathcal{F}$ is indeed at most 2. Specifically, we describe a deterministic algorithm Alg for $\mathcal{F}$ such that for any $f \in \mathcal{F}$ it requires at most 2 queries to recover the optimal action. First, Alg queries $a = \star$ and observes the encoding $\mathrm{encode}(\phi)$, which allows it to recover the formula $\phi$.
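The $\mathrm{encode}(\cdot)$ scheme described above can be sketched in Python as follows. This is a minimal illustration under our own clause representation; the helper names and the exact bit layout are our assumptions, not the paper's:

```python
from math import ceil, log2

def encode(phi, n):
    """Encode a 3CNF formula over n variables (a clause is a list of
    signed 1-based variable indices) as a real number in [1/4, 1/2)."""
    bits_per_lit = ceil(log2(n)) + 1          # variable index + negation bit
    bits = ""
    for clause in phi:
        for lit in clause:
            bits += format(abs(lit) - 1, f"0{bits_per_lit - 1}b")
            bits += "1" if lit < 0 else "0"
    # read the bit string as a binary fraction, scale into [0, 1/5),
    # then shift by 1/4 so the value lies in [1/4, 1/2)
    return 1 / 4 + int(bits, 2) / 2 ** len(bits) / 5

def decode(value, num_clauses, n):
    """Invert encode() exactly (possible since the feedback is noise-free)."""
    bits_per_lit = ceil(log2(n)) + 1
    total = num_clauses * 3 * bits_per_lit
    bits = format(round((value - 1 / 4) * 5 * 2 ** total), f"0{total}b")
    lits = [bits[i:i + bits_per_lit] for i in range(0, total, bits_per_lit)]
    return [[(-1 if b[-1] == "1" else 1) * (int(b[:-1], 2) + 1)
             for b in lits[3 * c:3 * c + 3]] for c in range(num_clauses)]

phi = [[1, -2, 3], [-1, 2, -3]]   # (x1 OR NOT x2 OR x3) AND (NOT x1 OR x2 OR NOT x3)
v = encode(phi, n=3)
assert 1 / 4 <= v < 1 / 2 and decode(v, num_clauses=2, n=3) == phi
```

In the noise-free setting the learner observing $f(\star)$ can run `decode` to recover $\phi$ exactly, which is the first step of the two-query algorithm.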
Then, by brute-force search over all assignments $a \in \mathcal{A}^{(2)}$, it can obtain $a^*_\phi$, if there is one; if there is none, the optimal action is simply $\star$. If $\phi$ is satisfiable, Alg queries $a = a^*_\phi$; if the observed value is 0, then the optimal action is again $\star$. Otherwise, Alg observes $\frac{1}{2^{n+1}} \cdot c$. Thus, it has recovered the optimal action $c$ for the function $f$ with only 2 queries (a third query of $a = c$ would be needed only to observe the optimal value itself).

Hardness: We prove hardness for any bandit learning algorithm $B$ for $\mathcal{F}$. Fix $\epsilon = 1/10$, and note that by the construction of the class, any algorithm that finds an $\epsilon$-optimal action has actually found an optimal one. Let $a_B$ denote the final query submitted by any algorithm $B$. Assume towards contradiction that there exists an algorithm $B$ such that for every $f \in \mathcal{F}$, using only $\mathrm{poly}(n)$ runtime, it outputs an action $a_B$ such that: $$ \mathbb{P}_B\left[f(a_B) = \max_{a \in \mathcal{A}} f(a)\right] \geqslant 3/4. $$ We will show this solves the SAT decision problem in $\mathrm{poly}(n)$ time in a probabilistic manner, demonstrating that this NP-complete problem is in RP, in contradiction to the assumption that $\mathrm{RP} \neq \mathrm{NP}$. Specifically, we describe how, given access to $B$, one can construct an algorithm such that for every $\phi \in \Phi$: if $\phi$ is satisfiable, the algorithm accepts (declares "yes") with probability at least $1/2$, and if not, the algorithm always rejects (declares "no"). Given any formula $\phi$, we simulate running the algorithm $B$ by responding exactly as $f_\phi \in \mathcal{F}^{all}$ would respond.
Specifically, for each query made by $B$ we respond as follows: if the query is $\star$, we respond with $\mathrm{encode}(\phi)$, and for any other query we respond with 0, until either $B$ queries some $a \in \mathcal{A}^{(2)}$ which is a satisfying assignment for $\phi$ (which we can verify efficiently), in which case we halt the simulation, or until $B$ terminates and returns its final query $a_B$. We now show that this simulation solves the SAT decision problem with one-sided error with constant probability, as detailed next. First, assume $\phi$ is not satisfiable. Then $B$ can never query an $a \in \mathcal{A}^{(2)}$ which is a satisfying assignment for $\phi$, and so we run the simulation until $B$ terminates, in $\mathrm{poly}(n)$ time, after which we declare that $\phi$ is not satisfiable. Thus, whenever $\phi$ is not satisfiable, we reject with probability 1. Now, assume $\phi$ is satisfiable. We have the following lemma, whose proof is deferred to the appendix.

Lemma 11 Let $\mathcal{F}$ be as constructed above, and let $B$ be any bandit learner for $\mathcal{F}$ as above (i.e., for every $f \in \mathcal{F}$ its output satisfies Equation (2)). Fix any $\phi \in \Phi$ that is satisfiable. Then, there exists $c \in \mathcal{A}^{(3)}$ such that if $B$ is run with $f_{\phi,c}$ and $a_1, \ldots, a_m$ denotes its query sequence during that run, it holds that: $$ \mathbb{P}_B\left[\exists i \in [m]:\ a_i \text{ is a satisfying assignment for } \phi\ \land\ \forall j < i,\ a_j \neq c\right] \geqslant \frac{3}{4} - \frac{2m}{2^n}.
$$

Then, by Lemma 11, there exists some $c \in \mathcal{A}^{(3)}$ such that when $B$ interacts with $f_{\phi,c}$, and $a_1, \ldots, a_m$ denotes its query sequence during that run: $$ \mathbb{P}_B\left[\exists i \in [m]:\ a_i \text{ is a satisfying assignment for } \phi\ \land\ \forall j < i,\ a_j \neq c\right] \geqslant \frac{1}{2}, $$ since $m$ is polynomial in $n$, so for all sufficiently large $n$ we have $\frac{2m}{2^n} \leqslant 1/10$, and hence $\frac{3}{4} - \frac{2m}{2^n} \geqslant \frac{1}{2}$. Importantly, we do not need to know this $c$ during the simulation. The reason is that, by the above, with probability at least $1/2$ the response sequence of $f_{\phi,c}$ is identical to the response sequence of $f_\phi$ until we observe a satisfying assignment, at which point we halt. Thus, with probability at least $1/2$ we observe a satisfying assignment and declare "yes". Notice that with probability $< 1/2$ our simulation will not be consistent with $f_{\phi,c}$, but that is of no concern to us, as we may reject in this case. It is, however, crucial that $B$ runs in $\mathrm{poly}(n)$ time even when interacting with $f_\phi$ rather than with $f_{\phi,c}$, which indeed holds as $f_\phi \in \mathcal{F}$. The proof of the theorem is then concluded by establishing the existence of efficient algorithms for the class $\mathcal{F}$, which holds by Lemma 18, given in the appendix.

# 5. Noise-free vs. noisy query complexity

The first question we address is whether there exists any provable relationship between the noise-free query complexity $\mathrm{QC}^0_{\epsilon,\delta}(\mathcal{F})$ and $\mathrm{QC}^\sigma_{\epsilon,\delta}(\mathcal{F})$.
First, we show that there exist function classes whose noise-free query complexity is constant but whose noisy $(\epsilon,\delta)$-query complexity is unbounded.

Proposition 12 Given $\epsilon \in [0, 1/2)$, there exists a function class $\mathcal{F}$ such that $\mathrm{QC}^0_{\epsilon,\delta'}(\mathcal{F}) = 1$ for all $\delta' \in [0,1)$ but $\mathrm{QC}^\sigma_{\epsilon,\delta}(\mathcal{F}) = \infty$ for all $\delta \in [0, 1/2)$ and all $\sigma > 0$.

The function class $\mathcal{F}$ used to prove Proposition 12 is based on an "informative action" construction where the action space equals $\{0\} \cup \mathbb{N}$. The optimal action of any function in $\mathcal{F}$ is indexed by $n \in \mathbb{N}$. Action 0 is "informative" because its mean reward reveals the identity of the optimal action, so that $\mathrm{QC}^0_{\epsilon,\delta'}(\mathcal{F}) = 1$. Nonetheless, in the noisy setting, as $n$ goes to infinity, estimating the reward of action 0 or finding the optimal action via enumeration of $\mathbb{N}$ requires a number of queries growing with $n$. The formal proof is given in Appendix B.1. Upper and lower bounds for $\mathrm{QC}^1_{\epsilon,\delta}(\mathcal{F})$ were derived by Hanneke and Wang (2024) for the high-noise regime (i.e., when $\sigma$ is of order 1 for functions with values in $[0,1]$), based on the generalized maximin volume $\gamma_{\mathcal{F},\epsilon}$ of $\mathcal{F}$ (see definition below).
Definition 13 (Generalized maximin volume; Hanneke and Wang, 2024) The generalized maximin volume of a function class $\mathcal{F}$ is defined as $$ \gamma_{\mathcal{F},\epsilon} = \sup_{p \in \Delta(\mathcal{A})} \inf_{f \in \mathcal{F}} \mathbb{P}_{a \sim p}\left( \sup_{a^*} f(a^*) - f(a) \leqslant \epsilon \right), $$ where $\Delta(\mathcal{A})$ is the set of all distributions on $\mathcal{A}$.

Theorem 1 of Hanneke and Wang (2024) presents an elegant and insightful result, establishing that $\mathrm{QC}^1_{\epsilon,\delta}(\mathcal{F})$ can be lower bounded by $\Omega(\log(1/\gamma_{\mathcal{F},\epsilon}))$ and upper bounded (up to constant and logarithmic factors) by $1/(\gamma_{\mathcal{F},\epsilon/2} \cdot \epsilon^2)$. In this work we explore the low-noise regime, where these results break down. In Theorem 14 we show, among other things, that for every $K \in \mathbb{N}$ and $\epsilon \in [0, 1/2)$ there is a function class $\mathcal{F}$ such that $\gamma_{\mathcal{F},\epsilon} = 1/K$ but there exist values of $\sigma > 0$ for which $\mathrm{QC}^\sigma_{\epsilon,1/4}(\mathcal{F}) = 1 < \log(1/\gamma_{\mathcal{F},\epsilon})$. This result shows that $\mathrm{QC}^\sigma_{\epsilon,\delta}(\mathcal{F})$ behaves fundamentally differently in the low-noise and high-noise regimes, highlighting the need for better theories to understand this phase transition.
Theorem 14 There exist universal constants $c, \bar{c} > 0$ such that for every integer $K \geqslant 2$ there exists a function class $\mathcal{F} \subseteq [0,1]^{\mathcal{A}}$ with action space $|\mathcal{A}| = K + 1$ such that for every $\epsilon \in [0, 1/2)$ it holds that $\gamma_{\mathcal{F},\epsilon} = 1/K$, and if $\sigma^2 \geqslant \frac{1}{\bar{c} K^{2/3}}$ then $$ \bar{c} K^{2/3} \sigma^2 \leqslant \mathrm{QC}^\sigma_{\epsilon,1/4}(\mathcal{F}) \leqslant c \log^{2/3}(K) K^{2/3} \sigma^2. $$ In particular, $$ \bar{c} \log(1/\gamma_{\mathcal{F},\epsilon}) \sigma^2 \leqslant \mathrm{QC}^\sigma_{\epsilon,1/4}(\mathcal{F}), $$ and if $\sigma^2 \leqslant \frac{1}{c \log^{2/3}(K) K^{2/3}}$ then $$ \mathrm{QC}^\sigma_{\epsilon,1/4}(\mathcal{F}) = \mathrm{QC}^0_{\epsilon,0}(\mathcal{F}) = 1. $$

Similar to Proposition 12, the function class $\mathcal{F}$ in Theorem 14 is constructed around an "informative action" structure, where the action space is given by $\{0\} \cup [K]$. The optimal action belongs to $[K]$, while the mean reward of action 0 (the informative action) reveals its identity. This construction differs from Proposition 12 in its encoding representation. Specifically, our design ensures that strategies leveraging the information encoded in the mean reward of action 0 achieve greater efficiency than the $\mathcal{O}(\sigma^2 K)$ queries needed by a strategy that individually estimates the mean rewards of all actions in $[K]$.
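To make the role of the informative action concrete, here is a small simulation (our own simplified toy instantiation, not the exact class from the proof): the mean reward of action 0 encodes the index of the optimal action on a grid of spacing $1/(2K)$, so a single query suffices when $\sigma$ is far below the grid spacing, while a large $\sigma$ forces averaging over many queries.

```python
import random

K = 16

def make_f(star):
    """Toy informative-action class over actions {0} U {1, ..., K}: the
    mean reward of action 0 encodes the identity of the optimal action."""
    return lambda a: star / (2 * K) if a == 0 else (1.0 if a == star else 0.0)

def identify(f, sigma, samples):
    """Average noisy queries of the informative action 0, then decode."""
    est = sum(f(0) + random.gauss(0.0, sigma) for _ in range(samples)) / samples
    return min(max(round(est * 2 * K), 1), K)

random.seed(0)
star, f = 7, make_f(7)
# low noise: a single query of action 0 identifies the optimal action
low = sum(identify(f, sigma=1e-3, samples=1) == star for _ in range(1000))
# high noise: one query is near-useless, but averaging many queries works
hi_one = sum(identify(f, sigma=1.0, samples=1) == star for _ in range(1000))
hi_avg = sum(identify(f, sigma=1.0, samples=50_000) == star for _ in range(50))
```

In this toy setup `low` is essentially 1000 and `hi_avg` is close to 50, while `hi_one` succeeds only rarely, mirroring the low-noise/high-noise phase transition described above.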
Interestingly, in the large-variance regime the optimal data collection strategy that achieves the lower bound rate works by querying action 0 sufficiently to narrow down the candidate optimal actions to $\mathcal{O}(K^{2/3})$ actions. When the noise variance is sufficiently small, the mean reward encoded by action 0 can be inferred from a single noisy sample with probability of error at most $1/4$, leading to a query complexity of just 1. These results highlight the intricate balance between exploiting the information structure of the function class (encoded here by action 0) and relying on brute-force exploration by following the policy dictated by the generalized maximin volume in Equation (3). The formal proof is given in Appendix B.3. Building on these results, we introduce the $(\epsilon,\delta)$-gap of a function class $\mathcal{F}$, denoted $\mathrm{Gap}_{\epsilon,\delta}(\mathcal{F})$, which we then use to derive sufficient conditions on $\sigma$ guaranteeing $\mathrm{QC}^\sigma_{\epsilon,\delta}(\mathcal{F}) \lesssim \mathrm{QC}^0_{\epsilon,\delta}(\mathcal{F})$.

Definition 15 (Informal: Gap of $\mathcal{F}$) Let $\mathcal{F} \subseteq [0,1]^{\mathcal{A}}$ be a finite function class over a finite action space. Given $\epsilon, \delta \in [0,1]$, we define $\mathrm{Gap}_{\epsilon,\delta}(\mathcal{F})$ as the smallest difference between achievable function values for an action that an $(\epsilon,\delta)$-optimal algorithm might play, given any positive-probability history.

Definition 21 in Appendix B.1 formalizes the description above. Our next result establishes that when $\sigma$ is small, the noisy $(\epsilon,\delta)$-query complexity of $\mathcal{F}$ is not too different from the noise-free $(\epsilon,\delta')$-query complexity of $\mathcal{F}$, provided that $\delta' < \delta$.
Theorem 16 Let $\delta, \delta' \in [0,1)$ be such that $\delta > \delta' \geqslant 0$. For any finite class $\mathcal{F} \subseteq [0,1]^{\mathcal{A}}$ over a finite action space, the noisy query complexity under zero-mean Gaussian noise with variance $\sigma^2$ such that $\sigma^2 < \frac{\mathrm{Gap}^2_{\epsilon,\delta'}(\mathcal{F})}{4 \log(2\,\mathrm{QC}^0_{\epsilon,\delta'}(\mathcal{F}) / (\delta - \delta'))}$ satisfies: $$ \mathrm{QC}^\sigma_{\epsilon,\delta}(\mathcal{F}) \leqslant \mathrm{QC}^0_{\epsilon,\delta'}(\mathcal{F}). $$

To prove Theorem 16, we show that one can construct a noisy-feedback algorithm Alg based on an $(\epsilon,\delta')$-optimal noise-free algorithm Alg' that is guaranteed to have an error probability of at most $\delta$. Alg uses nearest neighbors to transform noisy rewards into mean reward values. When the noise is small, Alg recovers the correct mean rewards with an error probability of at most $\delta - \delta'$. These rewards are fed into a copy of Alg', and the suggested action exploration policies are executed. The resulting algorithm achieves the same query complexity as Alg' with a slightly degraded error upper bound of $\delta$. The inequality $\delta' < \delta$ is required because, for any level of non-zero Gaussian noise ($\sigma > 0$), translating noisy rewards into their mean reward values necessarily produces an irreducible error probability. The proof of Theorem 16 is given in Appendix B.1.

# 6. Separation between regret and query complexity

We study the separation between regret and query complexity, both in the noise-free and noisy settings.
The regret of an algorithm Alg with action space $\mathcal{A}$, interacting for $T$ rounds by producing an action $a_t \in \mathcal{A}$ for $t = 1, \ldots, T$ and observing rewards generated by $f^*$, is defined as $$ \mathrm{Regret}_{\mathrm{Alg}}(T) = \sum_{t=1}^{T} \max_{a \in \mathcal{A}} f^*(a) - f^*(a_t). $$ A typical objective in the bandit online learning and reinforcement learning literature is to design algorithms that satisfy a sublinear regret bound, i.e., $\lim_{T \to \infty} \frac{\mathrm{Regret}(T)}{T} = 0$. In this section, we explore whether achieving low query complexity and low regret are compatible objectives. We show negative results in this regard in the noise-free (Appendix C.1) and noisy settings (Section 6.1). In each of these scenarios, we show that it is impossible to construct algorithms that achieve optimal query complexity while also incurring sublinear regret. This holds because, in certain problems, any optimal algorithm for $\epsilon$-arm identification must allocate a significant number of queries to actions that, while highly informative, result in substantial regret.

# 6.1. Regret vs. QC: noisy case

In this section we explore the compatibility of optimal query complexity and regret minimization in noisy feedback problems. Mirroring our results in the noise-free setting, in Theorem 17 we show that there are problems where the goal of finding an optimal action cannot be achieved without paying a regret scaling linearly with the query complexity; however, for the same function classes, there is an algorithm achieving regret scaling as the square root of the number of time steps.

Theorem 17 Let $d, T \in \mathbb{N}$.
There exists a function class $\mathcal{F}$ over action space $\mathcal{A}$ with unit-variance Gaussian noise such that $d \leqslant \mathrm{QC}^1_{0,1/4}(\mathcal{F}) \leqslant 80d$ and any algorithm Alg such that $m^1_{\mathrm{Alg}}(0, 1/4) \leqslant T$ satisfies $$ \max_{f \in \mathcal{F}} \mathbb{E}_{\mathrm{Alg}}[\mathrm{Regret}(T, f)] \geqslant \frac{d}{128}. $$ Moreover, there is an algorithm Alg' that satisfies $\max_{f \in \mathcal{F}} \mathbb{E}_{\mathrm{Alg}'}[\mathrm{Regret}(T, f)] \leqslant 8\sqrt{2T \log(T)}$ for all $T \in \mathbb{N}$.

Theorem 17 shows that there exists a function class with query complexity $\mathcal{O}(d)$ such that, when $T = \mathcal{O}(d^\alpha)$ for $\alpha < 2$, no algorithm Alg that is able to find an optimal action in $T$ queries can also satisfy an $\tilde{\mathcal{O}}(\sqrt{T})$ regret bound. Nonetheless, for the same function class, there are algorithms that achieve $\tilde{\mathcal{O}}(\sqrt{T})$ regret bounds. Theorem 17 is closely related to Theorem 1 of Bubeck et al. (2011). While Theorem 1 of Bubeck et al. (2011) rules out the existence of algorithms that achieve optimal regret and simple regret simultaneously, Theorem 17 establishes that no algorithm can achieve both optimal query complexity and optimal regret. Although related, the notions of simple regret and query complexity are different. The simple regret for a fixed horizon $T$ is the expected gap between the algorithm's output arm $a_T$ and the optimal arm $a^*$. In contrast, query complexity can be thought of as the minimum horizon $T$ such that the optimal simple regret is at most $\epsilon$.
The function class $\mathcal{F}$ used to prove Theorem 17 has an "information lock" structure. The action space is divided into two sets $\mathcal{A}_1$ and $\mathcal{A}_2$. The values of the mean rewards of actions in $\mathcal{A}_1$ can be used to infer the identity of the optimal action. Actions in $\mathcal{A}_1$ have large regret, and their mean rewards are equal to $1/2 + \epsilon_1$ or $1/2 - \epsilon_1$, while the mean rewards of actions in $\mathcal{A}_2$ are equal to 1 or $1 - \epsilon_2$, for parameters $\epsilon_1, \epsilon_2 \in [0,1]$ such that $\epsilon_1 \geqslant \epsilon_2$. To prove Theorem 17, we first establish that $\mathrm{QC}^1_{0,1/4}(\mathcal{F}) = \Theta(1/\epsilon_1^2)$. Second, we show that when $\epsilon_2 \approx \epsilon_1^2$, any algorithm Alg such that $m^1_{\mathrm{Alg}}(0, 1/4) \leqslant T$ must also incur regret satisfying $\max_{f \in \mathcal{F}} \mathbb{E}[\mathrm{Regret}(T, f)] \geqslant \Omega(1/\epsilon_1^2)$. The proof of Theorem 17 follows by setting $\epsilon_1 \approx 1/\sqrt{d}$. Finally, since the problem in this class is an instance of multi-armed bandits, the UCB algorithm is guaranteed to collect sublinear regret. The formal proof of Theorem 17 can be found in Appendix C.2. In Appendix C.1, we establish analogous results for the noise-free setting. Notably, these findings do not follow directly from Theorem 17. While Theorem 17 is stated for $\sigma = 1$, the query complexity in this construction approaches 1 as $\sigma$ tends to zero, preventing a straightforward extension to the noise-free case.
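The UCB algorithm invoked in the last step can be sketched as follows (a standard UCB1-style implementation for illustration, not code from the paper):

```python
import math
import random

def ucb(means, T, sigma=1.0, seed=0):
    """UCB1-style algorithm for a K-armed bandit with Gaussian noise;
    returns the cumulative (pseudo-)regret after T rounds."""
    rng = random.Random(seed)
    K = len(means)
    counts, sums = [0] * K, [0.0] * K
    best, regret = max(means), 0.0
    for t in range(1, T + 1):
        if t <= K:
            a = t - 1                      # play each arm once
        else:                              # maximize the upper confidence bound
            a = max(range(K), key=lambda i: sums[i] / counts[i]
                    + sigma * math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = means[a] + rng.gauss(0.0, sigma)
        counts[a] += 1
        sums[a] += reward
        regret += best - means[a]
    return regret

arms = [0.5, 0.6, 0.9, 0.7]
r1 = ucb(arms, T=5_000)
r2 = ucb(arms, T=10_000)   # doubling T adds much less than r1 extra regret
```

Because regret grows only logarithmically on a fixed instance, doubling the horizon adds comparatively little regret, in contrast with the $\Omega(d)$ regret that Theorem 17 forces on any query-optimal algorithm.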
We study the task of bandit learning, also known as best-arm identification, under the assumption that the true reward function $f$ belongs to a known, but arbitrary, function class $\mathcal{F}$. We seek a general theory of bandit learnability, akin to the PAC framework for classification. Our investigation is guided by the following two questions: (1) which classes $\mathcal{F}$ are learnable, and (2) how they are learnable. For example, in the case of binary PAC classification, learnability is fully determined by a combinatorial dimension (the VC dimension) and can be attained via a simple algorithmic principle, namely, empirical risk minimization (ERM). In contrast to classical learning-theoretic results, our findings reveal limitations of learning in structured bandits, offering insights into the boundaries of bandit learnability. First, for the question of "which", we show that the paradigm of identifying the learnable classes via a dimension-like quantity fails for bandit learning. We give a simple proof demonstrating that no combinatorial dimension can characterize bandit learnability, even for finite classes, following a standard definition of dimension introduced by Ben-David et al. (2019). For the question of "how", we prove a computational hardness result: we construct a reward function class for which at most two queries are needed to find the optimal action, yet no algorithm can do so in polynomial time unless RP = NP. We also prove that this class admits efficient algorithms for standard algorithmic operations often considered in learning theory, such as an ERM. This implies that computational hardness is, in this case, inherent to the task of bandit learning. Beyond these results, we investigate additional themes, such as learning under noise, trade-offs between noise models, and the relationship between query complexity and regret minimization.
# I. INTRODUCTION

[Figure 1: A space-filling curve (SFC) maps a multi-dimensional point $\mathbf{x}$ to a one-dimensional value $v$. For $\mathbf{x} = (10_2, 11_2)$, the bit merging patterns $P_1 = $ XYXY, $P_2 = $ XXYY, and $P_3 = $ XYYX yield $v = 1101_2$, $1011_2$, and $1110_2$, respectively.]

A space-filling curve (SFC) maps a multi-dimensional data point $\mathbf{x}$ to a one-dimensional value, say $v$, that can be represented by a mapping function $T: \mathbf{x} \mapsto v$. SFC mappings have been widely used for multi-dimensional indexing. The idea is to first map multi-dimensional data points to one-dimensional values using the SFC mapping function, and then use one-dimensional indexing methods, e.g., a conventional B-Tree [1] or any of the recent learned indexes [2, 3, 4, 5], to index the mapped values. This has been explored both in the literature [6, 7, 8, 9, 10, 11, 12] and by various database systems, e.g., PostgreSQL [13], Amazon DynamoDB [14], Apache HBase [15], and many other systems.

There are extensive studies on designing SFCs, e.g., the Z-curve [16, 17, 18], the C-curve [19], and the Hilbert curve [19, 20, 21, 22]. For example, the Z-curve adopts a bit interleaving mapping scheme [23] that first converts the dimensions of the input data to bit strings. Refer to Figure 1 for an illustration, where data point $\mathbf{x} = (2, 3)$ is converted into its two corresponding binary strings (one string per coordinate) with 2 bits for each dimension: $(10_2, 11_2)$. Then, bit interleaving merges bits alternately from the different bit strings to form one SFC value (in Figure 1, the bit interleaving adopts the XYXY merging scheme that merges the X and Y bit strings into the SFC value XYXY, e.g., mapping point $\mathbf{x}$ to $1101_2$).

However, one common problem is that each SFC has its own fixed mapping scheme/function that cannot be adjusted to fit different datasets.
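The bit-interleaving computation illustrated in Figure 1 can be sketched as follows (our own minimal implementation; the function names are ours):

```python
def to_bits(v, m):
    """Binary string of v using m bits."""
    return format(v, f"0{m}b")

def sfc_value(x, y, pattern, m=2):
    """Merge the bit strings of (x, y) following a bit merging pattern,
    e.g. 'XYXY' for the Z-curve, and return the resulting SFC value."""
    xb, yb = iter(to_bits(x, m)), iter(to_bits(y, m))
    merged = "".join(next(xb) if p == "X" else next(yb) for p in pattern)
    return int(merged, 2)

# point x = (2, 3) = (10_2, 11_2) from Figure 1 under the three patterns
assert sfc_value(2, 3, "XYXY") == 0b1101   # Z-curve interleaving
assert sfc_value(2, 3, "XXYY") == 0b1011
assert sfc_value(2, 3, "XYYX") == 0b1110
```

Each pattern is simply a different schedule for consuming bits from the X and Y strings, which is exactly the degree of freedom that bit merging patterns expose.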
The choice of one SFC for a dataset significantly affects query performance, and no single SFC can dominate the performance for all datasets and all query workloads (as shown in Fig. 2). To tailor a new SFC to the data and query workload properties, QUILTS [24] extends bit interleaving by considering other ways of merging bit strings. For example, instead of merging bits following XYXY, we can merge bits following XXYY or XYYX to generate different SFC values at different regions of the multi-dimensional dataset. Each pattern of merging bits is termed a bit merging pattern (BMP, for short), where each BMP describes a different SFC (as will be explained in Section II in greater detail). QUILTS evaluates all the candidate SFCs described by BMPs based on a given workload and data, and selects the optimal one using heuristic methods. QUILTS proposes to use multiple SFCs at the same time to index one dataset so that the resulting mixed SFC is query-aware and skew-tolerant for a given query pattern. However, the resulting SFC is static and hence does not change if the data distribution or the query workload changes over time. QUILTS makes the first attempt to utilize data and query workload properties to select an optimal SFC. However, like other SFCs, QUILTS applies a single BMP to the entire data space (i.e., QUILTS applies one BMP to compute the SFC values of all data points). Optimal SFCs may differ for different data subspaces; a detailed example illustrating this problem is given in Section III-A. Another issue with QUILTS is that it does not provide an effective way of generating and evaluating candidate SFCs. The heuristic rules used by QUILTS are designed for very specific types of window queries (e.g., with a fixed area) and do not fit general query processing scenarios, where the workload includes more than one query type (with different areas or aspect ratios).
For example, a heuristic rule used by QUILTS assumes that grid cells intersecting a query should be contiguous in SFC order, which may not hold for queries with different aspect ratios (this is elaborated on in greater detail in Section III-A). Further, QUILTS does not consider the scenario where the distributions of data and/or queries change, which results in sub-optimal query performance if the SFC is not updated. To address the limitations of an SFC with a single BMP, i.e., a single mapping scheme, our idea is to design different BMPs for different subspaces based on the data and query workload features, aiming to optimize query performance. The resulting SFC comprises multiple BMPs, each corresponding to a subspace; we refer to such an SFC as a piecewise SFC. We focus on three aspects of the piecewise SFC construction framework: (1) Designing the piecewise SFC: We propose a binary tree structure to help construct and design a piecewise SFC. (2) Learning the piecewise SFC: We design a data-driven learning-based method that can automatically construct the piecewise SFC according to the query workload to optimize query performance. (3) Updating the piecewise SFC: We develop a mechanism to efficiently update the piecewise SFC with respect to updated query and data scenarios.

# A. Designing Piecewise SFCs

Designing effective BMPs for different subspaces, while guaranteeing desirable properties of the overall mapping function for indexing data, is non-trivial. To achieve this, we propose to seamlessly integrate subspace partitioning and BMP generation. We develop a new structure termed the Bit Merging Tree (BMTree, for short) to recursively generate both the subspaces and the corresponding BMPs.
In the BMTree, (1) each node represents a bit from the binary string of the selected dimension, and its bit value (0 or 1) partitions the data at the node into two child nodes, and (2) each leaf node represents a subspace, and the sequence of bits from the root to the leaf node represents the BMP for the subspace. Further, we prove that the piecewise SFC modeled by the BMTree maintains two desirable properties: monotonicity [25] and injection. Monotonicity is a desirable property for designing window query algorithms: it guarantees that the SFC values of data points in a query rectangle fall in the SFC value range formed by the two boundary points of the query rectangle. Combining different SFCs from different subspaces to obtain a final SFC for the whole space risks breaking the monotonicity property. Similarly, it may also lead to an injection violation, i.e., the mapping function may not return a unique mapped value for each input. We construct the BMTree in a principled way such that the two properties are guaranteed.

# B. Learning Piecewise SFCs

To address the limitation of heuristic algorithms in SFC design, we propose to model building the BMTree as a Markov decision process (MDP, for short) [26], aiming to develop data-driven solutions for designing suitable BMPs for different subspaces. Specifically, we define the states, the actions, and the reward signals of the MDP framework to build the BMTree such that the generated BMTree optimizes query performance. We leverage reinforcement learning and Monte Carlo Tree Search (MCTS, for short) [27] to learn a performance-aware policy and avoid local optima. To improve performance, we design a greedy action selection algorithm for MCTS. Moreover, to improve training efficiency, we define a metric termed ScanRange as a proxy of the query performance (e.g., I/O cost or query latency), and apply ScanRange in the computation of rewards.

# C. Updating Piecewise SFCs

In situations where the distributions of data and queries change [28], the previously learned module faces the issue of sub-optimal performance. Fully retraining a BMTree poses efficiency challenges due to the BMTree training cost and the need to update all SFC values of data points maintained in the index. To address this issue, we propose a novel mechanism aligned with the BMTree structure that enables partial retraining, hence reducing the overall cost. First, we introduce a distribution shift score to quantify the degree of the shift and decide whether retraining is necessary. Then, we develop an optimization potential score to identify which nodes of the BMTree, when optimized, can significantly enhance query performance. We partially delete the nodes of the BMTree that need to be retrained, develop an adapted reinforcement learning training environment (with the states, actions, and rewards adapted for partial retraining), and regenerate the BMTree with respect to the updated data and query workloads.

The main contributions of this paper are as follows: (1) We propose the idea of piecewise SFCs, which allows designing different BMPs for different subspaces by considering the data and query workload properties, to deal with non-uniformly distributed data and query workloads. (2) To design piecewise SFCs, we introduce the BMTree to partition the data space into subspaces and generate a BMP for each subspace. We prove that the piecewise SFC represented by a BMTree satisfies two properties, namely injection and monotonicity. (3) To build a BMTree, we develop an RL-based solution by modeling BMP design as an MDP, and design an MCTS-based BMTree construction algorithm. We develop the ScanRange metric to efficiently measure window query performance on an SFC; as a result, the ScanRange metric speeds up the learning procedure.
(4) To efficiently update a BMTree, we develop a mechanism that allows partial retraining of the BMTree when data and/or query distributions shift, and enhances query performance at reasonable retraining cost. (5) We integrate our learned SFCs into the $B^+$-Tree index inside PostgreSQL and inside the learned spatial index RSMI [12]. Experimental results under both settings consistently show that the BMTree outperforms the baselines in terms of query performance. Further, the partial retraining mechanism achieves notable performance enhancement that is competitive with full retraining while achieving over $2\times$ speedup compared to full retraining. Compared to the previously published paper [29], this paper introduces over $35\%$ new content. We extend the BMTree by incorporating a novel reconstruction mechanism that enables it to quickly adapt to distribution shifts and achieve better query performance, which is not supported by other SFC methods. We also include additional experiments to evaluate the proposed mechanism under different shift settings, including data shift, query shift, and their combination.
# II. PROBLEM STATEMENT & PRELIMINARIES
# A. Problem Definition
Let $\mathcal{D}$ be a database, where each data point $\mathbf{x} \in \mathcal{D}$ has $n$ dimensions, denoted by $\mathbf{x} = (d_1, d_2, \ldots, d_n)$. For ease of presentation, we consider only 2-dimensional data points $\mathbf{x} = (x, y)$; the discussion extends readily to $n$ dimensions. $\mathbf{x}$ can be converted to bit strings as $\mathbf{x} = ((x_1 x_2 \ldots x_m)_2, (y_1 y_2 \ldots y_m)_2)$,
where each $x_i$, $y_j$ ($1 \leq i, j \leq m$) is 0 or 1 (i.e., $x_i, y_j \in \{0, 1\}$) and $m$ is the length of the bit string, which depends on the cardinality of the dimensions $x$ and $y$. For example, $\mathbf{x} = (4, 5)$ can be converted in base 2 to $\mathbf{x} = (100_2, 101_2)$. In previous studies on SFC-based multidimensional indexes, e.g., [1, 23, 30], values of data points are typically mapped to fine-grained grid cells for discretization. An SFC maps $\mathbf{x}$ into a scalar value $v$ (called the SFC value) with a mapping function $C(\mathbf{x}) = v$. An SFC value $v$ can be used as the key value of data $\mathbf{x}$ to determine the order of $\mathbf{x}$ in $\mathcal{D}$. Problem 1 (SFC Design): Given a database $\mathcal{D}$ and a query workload $Q$, we aim to develop a mapping function $T$ that maps each data point $\mathbf{x} \in \mathcal{D}$ into an SFC value $v$, such that with an index structure (e.g., a $B^+$-Tree) built on the SFC values of the data points in $\mathcal{D}$, the query performance (e.g., I/O cost and querying time) on $Q$ is optimized.
# B. Preliminaries on SFC
We present two desired properties for a mapping function $T$, namely Injection and Monotonicity. Then, we describe the curve design methods in the $Z$-curve and QUILTS that also satisfy these properties. Injection. An SFC design is expected to satisfy the injection property, which guarantees a unique mapping from $\mathbf{x}$ to $v$. This ensures that each SFC value $v$ can be used as a key value of $\mathbf{x}$ for ordering and indexing data. It is defined as follows. Definition 1 (Injection): Given a function $C: \mathbf{x} \to v$, $C$ is injective if each $\mathbf{x}$ maps to a unique value $v$, s.t.
$\forall \mathbf{x}_1 \neq \mathbf{x}_2,\ C(\mathbf{x}_1) \neq C(\mathbf{x}_2)$. The injection property is desirable for an index to narrow the search space for better query performance. Consider an extreme situation where all data points map to the same value. Then, an index based on the SFC values cannot narrow the search space for a query. Monotonicity. Monotonicity [25] is defined as follows. Definition 2 (Monotonicity): Given two $n$-dimensional data points $\mathbf{x}'$ and $\mathbf{x}''$ with SFC values $C(\mathbf{x}')$ and $C(\mathbf{x}'')$, a mapping function $C$ is monotone if, whenever $d_i' \geq d_i''$ for all $i \in [1, n]$, it holds that $C(\mathbf{x}') \geq C(\mathbf{x}'')$. Maintaining monotonicity is a desirable property for mapping data points to SFC values, as explained below. Assume the origin of the space is at the lower left, and consider a 2-dimensional window query represented by its minimum (bottom-left corner) and maximum (top-right corner) points, i.e., $\mathbf{q}_{min} = (x_{min}, y_{min})$ and $\mathbf{q}_{max} = (x_{max}, y_{max})$. Let $\mathcal{P} = \{(x, y) \mid x_{min} \leq x \leq x_{max},\ y_{min} \leq y \leq y_{max}\}$ denote the query results bounded by the query window. If the monotonicity property holds, the result points in $\mathcal{P}$ are within the range bounded by the SFC values of $\mathbf{q}_{min}$ and $\mathbf{q}_{max}$.
The reason is that for any data point $\mathbf{p} \in \mathcal{P}$, its SFC value $C(\mathbf{p})$ always satisfies $C(\mathbf{q}_{min}) \leq C(\mathbf{p}) \leq C(\mathbf{q}_{max})$. The property is desirable since it enables us to design simple and efficient algorithms for processing a window query by checking only data points whose SFC values are within the bounded range; otherwise, such algorithms do not work. For example, the Hilbert curve and its variants [19, 20, 21] do not satisfy the monotonicity property, which makes it hard to identify the scanning range for a window query in the space of their SFC values, and requires maintaining additional structures and designing more complicated algorithms [32]. Computing SFC values in the $Z$-curve [16, 17, 18] and in QUILTS [24]. Both the $Z$-curve and QUILTS guarantee the injection and monotonicity properties. Figure 1 exemplifies how the $Z$-curve and QUILTS map a data point $\mathbf{x}$ to a scalar SFC value $v$. The curve design in the $Z$-curve and QUILTS is presented as follows. The SFC value of $\mathbf{x}$ in the $Z$-curve is computed via bit interleaving, which generates a binary number consisting of bits (0 or 1) filled alternately from each dimension's bit string. The $Z$-curve value of a 2-dimensional data point $\mathbf{x}$ is computed by Function $C_z$:
$$ C_z(\mathbf{x}) = (x_1 y_1 x_2 y_2 \ldots x_m y_m)_2 $$
It assumes that all dimensions have the same bit-string length; the zero-padding technique is usually applied to equalize the lengths by padding zeros at the head of each bit string. QUILTS generalizes the bit-interleaving pattern of the $Z$-curve to more general bit merging patterns, each of which represents a way of merging bits. For two-dimensional data, QUILTS defines a bit merging pattern as follows.
Definition 3 (Bit Merging Pattern): A bit merging pattern (BMP) is a string $\mathtt{P}$ of length $2m$ over the alphabet $\{\mathtt{X}, \mathtt{Y}\}$ s.t. it contains exactly $m$ X's and $m$ Y's. Given a BMP $\mathtt{P} = p_1 p_2 \ldots p_{2m}$, the SFC described by $\mathtt{P}$ is defined as follows. We set
$$ C_{\mathtt{P}}(\mathbf{x}) = (b_1 b_2 \ldots b_{2m})_2 $$
according to the following rule: (1) Since $\mathtt{P}$ contains exactly $m$ X's, we let $I = \{i_1, \ldots, i_m\}$ be the list of ordered indices such that $p_{i_\ell} = \mathtt{X}$. Then, we set $b_{i_\ell} = x_\ell$ for $1 \leq \ell \leq m$. (2) Similarly, for the value of $y$, we consider $J = \{j_1, \ldots, j_m\}$ where $p_{j_\ell} = \mathtt{Y}$, and assign $b_{j_\ell}$ the bit value of $y_\ell$. For example, given the BMP $\mathtt{P} = \mathtt{XXYY}$, the value of data point $\mathbf{x}$ computed by $C_{\mathtt{P}}$ is $C_{\mathtt{P}}(\mathbf{x}) = (x_1 x_2 y_1 y_2)_2$. Notice that both $x$ and $y$ are subsequences of $C_{\mathtt{P}}(\mathbf{x})$. SFCs represented with different BMPs form a set of SFCs. QUILTS considers this set and selects the optimal SFC, evaluated on a given query workload, as the output curve. We prove the monotonicity of SFCs with BMPs, which guarantees the monotonicity property of our method in Section VII. The detailed proof is given in [29]. Lemma 1 (Monotonicity of SFCs with BMPs): An SFC with a BMP achieves the monotonicity property.
# III. MOTIVATION AND METHOD OVERVIEW
# A. Motivations and Challenges
Motivation 1: Piecewise SFC Design.
QUILTS and earlier SFCs based on BMPs use only one BMP to compute the SFC values of all data points, which may not perform well for query processing. Example 1: Figure 2 shows a $4 \times 4$ grid space, where the green and yellow dashed rectangles represent two window queries $Q_1$ (horizontal) and $Q_2$ (vertical), respectively. The red lines represent the ordering of grid cells w.r.t. three SFCs. For example, consider SFC-1, whose $\mathtt{P}_1 = \mathtt{XYYX}$, so the computed value for input $\mathbf{x} = ((x_1 x_2)_2, (y_1 y_2)_2)$ is $C_{\mathtt{P}_1}(\mathbf{x}) = (x_1 y_1 y_2 x_2)_2$. Notice that in SFC-1, $x_1$ is put as the first bit in the combined bit string, and thus any data point with $x_1 = 0$ (residing in the left half of Figure 2(a)) will have a smaller mapped value than any data point with $x_1 = 1$ (residing in the right half of Figure 2(a)). We label the grid ids based on the mapped values of grid cells computed by the SFCs. As discussed in Section II-B, a typical algorithm first locates the grid ids of the minimum (bottom-left corner) and the maximum (top-right corner) points of a query window. Different SFCs result in accessing different grid cells for answering the two window queries $Q_1$ and $Q_2$. For instance, with SFC-1, $Q_1$ and $Q_2$ need 2 and 3 grid scans, respectively. With SFC-2 (with $\mathtt{P}_2 = \mathtt{XYXY}$), $Q_1$ and $Q_2$ need 3 and 2 grid scans, respectively. Detailed computation can be found in [29]. Further, with SFC-1, cells in the window of $Q_1$ are consecutive (cells 7 and 8 form a contiguous sequence, noted as 1 run in [19]), while cells 13 and 15 in $Q_2$ are not consecutive (2 runs). A query with 1 run enjoys contiguous memory access, which is preferred.
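The grid scans in Example 1, and the monotonicity of Lemma 1, can be checked mechanically. Below is a minimal sketch assuming 1-indexed cell ids; the concrete coordinates of `Q1` and `Q2` are reconstructed from the cell ids quoted above, and the helper names are our own, so treat them as illustrative assumptions rather than the paper's code:

```python
def bmp_value(p, x, y, m=2):
    """SFC value of grid cell (x, y) under BMP p (Definition 3): walk p
    and consume the next unused bit of the named dimension."""
    xb, yb = format(x, f"0{m}b"), format(y, f"0{m}b")
    bits, xi, yi = [], 0, 0
    for ch in p:
        if ch == "X":
            bits.append(xb[xi]); xi += 1
        else:
            bits.append(yb[yi]); yi += 1
    return int("".join(bits), 2)

def grid_scans(value_of, window):
    """Number of cells scanned for a window query: all cells whose mapped
    values lie between the window's min and max mapped values."""
    vals = [value_of(x, y) for x, y in window]
    return max(vals) - min(vals) + 1

# Query windows inferred from the quoted cell ids (assumptions):
Q1 = [(0, 3), (1, 3)]  # horizontal window
Q2 = [(2, 2), (2, 3)]  # vertical window

sfc1 = lambda x, y: bmp_value("XYYX", x, y)
sfc2 = lambda x, y: bmp_value("XYXY", x, y)
# A piecewise combination: XYYX on the left half (x1 = 0), XYXY on the right
sfc3 = lambda x, y: sfc1(x, y) if x < 2 else sfc2(x, y)
```

Under these assumptions, `sfc1` scans 2 and 3 cells for `Q1` and `Q2`, `sfc2` scans 3 and 2, and the piecewise `sfc3` scans 2 for both, matching the counts quoted in the example.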
In the example, SFC-1 performs better for $Q_1$, while SFC-2 is better for $Q_2$. A natural question is whether we can combine the advantages of the two BMPs of SFC-1 and SFC-2, i.e., use XYYX to organize the data on the left-hand side and XYXY to organize the data on the right-hand side. This design results in a piecewise SFC, shown as SFC-3 in Figure 2(c). With SFC-3, we need 2 grid scans for both $Q_1$ and $Q_2$. This example motivates the need for designing a piecewise SFC. Fig. 2: Motivation for piecewise SFCs. SFC-1 is described by the BMP XYYX and SFC-2 by XYXY. In contrast, SFC-3 (ours) is described by two BMPs: the left by XYYX and the right by XYXY, where the green shade highlights the scanned grids. Motivation 2: Learning-based Method for Piecewise SFC Construction. Classic SFCs (the $Z$-curve, the Hilbert curve, etc.) are based on a single scheme and fail to utilize the underlying database instance to design the SFC. In contrast, QUILTS proposes to utilize the given database and query workload to evaluate and select an SFC from a set in which each SFC is described by a BMP. However, QUILTS does not directly evaluate an SFC w.r.t. query performance but instead uses heuristic rules to generate candidate SFCs. The heuristic rules select BMPs such that the grid cells intersecting with a query are continuous in the curve order, resulting in fewer grid scans. These heuristics only work for query workloads containing limited types of window queries (e.g., with the same aspect ratio), and are not effective in general situations (where more than one query type with different aspect ratios and region areas exists). Due to these limitations, more principled solutions are called for to utilize the database and query workloads for generating and selecting an SFC. Learning-based methods are promising for this purpose. Motivation 3: Efficient Piecewise SFC Update.
The distribution shift issue exists when maintaining data-driven learned indexes. When distribution shift happens, the performance of the learned modules can become sub-optimal. A retraining procedure is preferred if the performance decreases to a certain degree. However, retraining the BMTree from scratch can be costly. Moreover, in cases where distribution shifts occur unevenly across subspaces (e.g., some subspaces experience significant distribution shifts while others experience only mild shifts), not all BMPs require redesign. Two examples are given in Fig. 3 to illustrate uneven shifts of data and queries, respectively. In Fig. 3a, the data located in the left half of the space shifts from being uniformly distributed to being non-uniformly distributed. In Fig. 3b, the queries located in the left half of the space have shifted in both their spatial distribution (considering the center point of each query rectangle) and their categories (from Type 1 to Type 2, with different aspect ratios). These distribution shifts can lead to a decrease in performance of the SFC learned on the historical data and query distributions. The issue calls for an efficient way to update the BMTree w.r.t. distribution shifts. Additionally, it is preferable if the solution allows for a partial redesign of the BMTree while keeping a portion unchanged, so that only the data points located in the retrained subspaces need their SFC values updated to maintain the corresponding indexes. Fig. 3: Examples of distribution shifts of both data and query workloads over different subspaces. Challenges. Piecewise SFC design gives rise to the following challenges, as discussed in the introduction. (1) How to partition the space and design an effective BMP for each subspace? The piecewise SFC design needs to consider both space partitioning and BMP generation.
(2) How to design piecewise SFCs such that the two desirable properties, namely monotonicity and injection, still hold? (3) How to design a data-driven approach that automatically builds the BMTree, given a database and query workloads? (4) How to identify the appropriate subspaces for partial retraining of the piecewise SFC without compromising its properties?
# B. Overview of the Proposed BMTree
The Bit Merging Tree (BMTree) for Piecewise SFC Design. To address the first challenge, we propose a novel way of seamlessly integrating subspace partitioning and BMP generation by constructing the BMTree, a binary tree that models a piecewise SFC. Each node of the BMTree is filled with a bit from a dimension. The filled bit partitions the space into two subspaces corresponding to two child nodes: the left branch is the subspace where data points have a bit value of 0, and the right branch where they have a bit value of 1. The BMTree partitions the whole data space into subspaces, each corresponding to a leaf node, with its BMP being the concatenated bit sequence from the root to the leaf node. We present the BMTree structure in Section IV. Furthermore, the BMTree mechanism guarantees that the generated piecewise SFC satisfies the two properties, which addresses the second challenge. We prove that the piecewise SFC represented by a BMTree satisfies both monotonicity and injection in Section VII. RL-based Algorithm for Constructing a BMTree. To address the third challenge, we design a learning-based method that learns from data and query workloads to build the BMTree. We model the building of the BMTree as a Markov decision process [26]. The process of building a BMTree comprises a sequence of actions that select bits for tree nodes in a top-down order. To learn an effective policy for building the BMTree, we propose a new approach for integrating a greedy policy into the Monte Carlo Tree Search (MCTS) framework [27].
Specifically, we develop a greedy policy that selects an action to fill a bit for each node during tree construction. For each node, the greedy policy chooses the bit that achieves the most significant reward among all candidate bits. Afterwards, we apply the greedy policy as a guidance policy and use MCTS to optimize the BMTree with the objective of providing good query performance while avoiding local optima. Moreover, we introduce a fast-to-compute metric, termed ScanRange, to speed up reward generation. We present the proposed solution in Section V and its time complexity analysis in Section VII. Partially Retraining a BMTree for Piecewise SFC Update. To address the last challenge, we develop a mechanism to efficiently update the BMTree. First, we propose a principled way to measure the shift of the subspace modeled by each BMTree node w.r.t. queries and data. A shift score measuring the degree of distribution shift is introduced based on the Jensen–Shannon (JS) divergence [33], a widely used tool for measuring the similarity between distributions. The mechanism then detects the nodes with the largest potential for performance optimization. Next, we partially delete the nodes of the BMTree that need to be retrained, and apply an adapted RL framework to regenerate the deleted BMTree parts with respect to the updated database scenario. The structure of the BMTree ensures that the regenerated piecewise SFC retains all desired properties, since the regenerated BMTree still properly models a piecewise SFC. We present the details of the retraining mechanism in Section VI.
# IV. DESIGNING PIECEWISE SFC: BIT MERGING TREE (BMTREE)
We present how to develop a piecewise SFC, modeled by the Bit Merging Tree (BMTree), a binary tree. Designing a BMP. To design a BMP P, we need to decide which character (X or Y in the two-dimensional case) is filled in each position of P.
A left-to-right design procedure decides the filling characters in order, from $p_1$ to $p_{2m}$. The key to BMP design is a policy deciding which dimension (X or Y) to fill into each position of P. Designing a piecewise SFC with multiple BMPs. Next, we discuss piecewise SFC design. As discussed in Section III-A, one challenge in designing a piecewise SFC is how to handle two intertwined subtasks, namely subspace partitioning and BMP design within each subspace. It is also challenging to guarantee that the piecewise SFC comprising different BMPs for different subspaces still satisfies both the injection and monotonicity properties. To address these challenges, we introduce a new solution that simultaneously generates the subspaces and designs their BMPs. We follow a left-to-right BMP design approach, and start with an empty string P. For example, if we fill X in the first position of P, Bit $x_1$ will occupy the first position $b_1$ of the merged bit string; then, the whole data space is partitioned into two subspaces w.r.t. the value of Bit $x_1$, where one subspace corresponds to $x_1 = 0$ and the other to $x_1 = 1$. This partitioning enables us to separately design different BMPs for the two subspaces. Notice that the BMPs for the two subspaces share X as the first character, but can have distinct filling choices for the next $2m - 1$ characters. By recursively repeating this operation, we fill in the subsequent characters of the BMP for each subspace, thus generating multiple subspaces, each with a different BMP. One advantage of this approach is that it seamlessly integrates subspace partitioning with BMP generation. Fig. 4: (a) An example of a piecewise SFC that comprises two BMPs $\mathtt{P}_1$ and $\mathtt{P}_2$ for computing the values of Data Points a and b. (b) A BMTree that combines the two BMPs.
Example 2: Figure 4a gives an example of a piecewise SFC, where Dimensions $x$ and $y$ are bit strings of length 2. First, X is selected, and the whole space is partitioned w.r.t. the value of Bit $x_1$ into two subspaces, where Subspace $S_1$ corresponds to $x_1 = 0$ and Subspace $S_2$ corresponds to $x_1 = 1$. Next, we separately design BMPs for $S_1$ and $S_2$, where all data points in $S_1$ share $x_1 = 0$ and those in $S_2$ share $x_1 = 1$. We generate two example BMPs: $\mathtt{P}_1 = \mathtt{XYXY}$ for $S_1$ and $\mathtt{P}_2 = \mathtt{XXYY}$ for $S_2$. Finally, we get a piecewise SFC that comprises $C_{\mathtt{P}_1}$ for $S_1$ and $C_{\mathtt{P}_2}$ for $S_2$. This piecewise SFC represents the function:
$$ C(\mathbf{x}) = \begin{cases} (x_1 y_1 x_2 y_2)_2 & \text{if } x_1 = 0 \\ (x_1 x_2 y_1 y_2)_2 & \text{if } x_1 = 1 \end{cases} $$
Therefore, if Data Point a is located in $S_1$, we apply $C_{\mathtt{P}_1}$ to compute a's SFC value. Similarly, if Data Point b is in $S_2$, we apply $C_{\mathtt{P}_2}$ to compute b's SFC value. To facilitate the process of designing piecewise SFCs, we propose the Bit Merging Tree (BMTree) structure, which is used to simultaneously partition the space and generate the BMPs. Figure 4b gives the corresponding BMTree for the example piecewise SFC of Figure 4a. Since the example piecewise SFC is developed with only 2 BMPs, the left subtree of the root node shares $\mathtt{P}_1$ while the right subtree shares $\mathtt{P}_2$. Next, we present the BMTree. The Bit Merging Tree (BMTree).
The BMTree is a binary tree that models a piecewise SFC $C_{\mathtt{T}}$, and is denoted by $\mathtt{T}$. The depth of a BMTree $\mathtt{T}$ equals the length of a BMP, i.e., $2m$ for the 2-dimensional space. Every node of $\mathtt{T}$ corresponds to a bit $x_i$ or $y_i$, $1 \leq i \leq m$. The left (resp. right) child denotes the subspace with bit value 0 (resp. 1). Each path from the root node to a leaf node represents the BMP for the subspace of the leaf node, namely the concatenation of the bits of the nodes along the path. The SFC value $C_{\mathtt{T}}(\mathbf{x})$ of a data point $\mathbf{x}$ is computed by traversing a path of $\mathtt{T}$ as follows. We start from the root node; for each traversed node, say one corresponding to Bit $x_i$: if $x_i = 0$, we visit the left child node; otherwise, we visit the right child. When we reach a leaf node, the BMP of the traversed path is used to compute $C_{\mathtt{T}}(\mathbf{x})$. The green path in Figure 4b, traversed for Point a, represents BMP $\mathtt{P}_1$, while the blue path, traversed for Point b, represents $\mathtt{P}_2$. To construct a BMTree, we develop a breadth-first construction algorithm that assigns bits to the BMTree's nodes. Details (pseudo-code and the corresponding illustration) can be found in [29].
# V. LEARNING PIECEWISE SFC: MCTS-BASED BMTREE CONSTRUCTION
It is difficult to design heuristic methods that construct a BMTree optimizing the query performance for a workload on a database instance. This can be observed from QUILTS, which uses heuristic rules that work only for workloads containing specific types of window queries, and fails to directly optimize query performance. In contrast, we propose a reinforcement learning (RL) based method for learning a decision policy that builds the BMTree to optimize query performance directly.
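As a concrete sketch, the BMTree traversal described in Section IV can be implemented as follows, here encoding the two-BMP tree of Figure 4b; the `Node` and `chain` helpers and the node layout are our own illustrative assumptions, not the paper's code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    dim: str                        # "X" or "Y": which dimension's next bit this node consumes
    left: Optional["Node"] = None   # child for bit value 0
    right: Optional["Node"] = None  # child for bit value 1

def chain(dims):
    """A subtree that never splits further: both children are identical,
    so every path below spells the same remaining BMP suffix."""
    node = None
    for d in reversed(dims):
        node = Node(d, node, node)
    return node

# The BMTree of Figure 4b: the root consumes x1; the left subtree
# completes P1 = XYXY and the right subtree completes P2 = XXYY.
root = Node("X", chain("YXY"), chain("XYY"))

def sfc_value(tree, x, y, m=2):
    """C_T(x): walk the tree guided by the point's bits; the visited
    path spells out the BMP used to merge the bits into one value."""
    xb, yb = format(x, f"0{m}b"), format(y, f"0{m}b")
    bits, xi, yi = [], 0, 0
    node = tree
    while node is not None:
        if node.dim == "X":
            b = xb[xi]; xi += 1
        else:
            b = yb[yi]; yi += 1
        bits.append(b)
        node = node.left if b == "0" else node.right
    return int("".join(bits), 2)
```

For a point with $x_1 = 0$ the walk descends left and applies $\mathtt{P}_1 = \mathtt{XYXY}$; with $x_1 = 1$ it descends right and applies $\mathtt{P}_2 = \mathtt{XXYY}$, exactly as in Example 2.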
To allow an RL policy to construct the BMTree, we model BMTree construction as a Markov decision process (MDP, for short). Then, we design a BMTree construction framework with a model-based RL method, namely Monte Carlo Tree Search (MCTS, for short). Unlike traditional algorithms, e.g., greedy search or $A^*$, MCTS is an RL approach with a superior exploration-exploitation balance, mitigating the issue of local optima. MCTS is well-suited for the problem at hand, and offers stable performance without extensive parameter tuning, compared with other RL algorithms, e.g., PPO [34]. Figure 5 gives the workflow of the MCTS-based BMTree construction framework. We define one action taken by the RL agent to be a series of bits that fill one level of nodes in the BMTree, after which the nodes of the next level are generated. The action space size grows exponentially with the number of nodes, and it becomes difficult for RL to learn a good policy with an enormously large action space. To address this issue, we design a greedy action selection algorithm that helps guide MCTS to search for good actions. Moreover, we design a metric, termed ScanRange, to speed up reward computation. BMTree Construction as a Decision Making Process. We proceed to illustrate how we model BMTree construction as an MDP, including the design of the states, actions, transitions, and reward. States. Each partially constructed BMTree $\mathtt{T}$ is represented by a state, so as to associate each tree with its corresponding query performance. The state of a BMTree is represented by the bits filled into the BMTree's nodes. For example, in Figure 5, the current (partially constructed) BMTree's state is represented as $\mathtt{T} = \{(1: \underline{\mathtt{X}}), (2: \underline{\mathtt{X}}\,\underline{\mathtt{Y}})\}$, where $\underline{\mathtt{X}}$ and $\underline{\mathtt{X}}\,\underline{\mathtt{Y}}$ are the bits filled into the nodes at Levels 1 and 2. Actions.
Consider a partially constructed BMTree $\mathtt{T}$ that currently has $N$ nodes to be filled. We define the actions as filling bits into these nodes. We aim to learn a policy that decides which bit to fill for each node. Furthermore, the policy decides whether the BMTree will split the subspace of a tree node. If the policy decides to split, the tree node generates two child nodes based on the filled bit $\mathtt{b}$, and the action is denoted by $\underline{\mathtt{b}}$ (with an underline). Otherwise, the tree node generates only one child node that corresponds to the same subspace as its parent, and the action is denoted by $\mathtt{b}$ (without an underline). During construction, the policy assigns bits to all $N$ nodes. The action is represented by $A = \{a_1, \ldots, a_N\}$, where $a_i = (\mathtt{b}_i, sp_i)$, $\mathtt{b}_i$ denotes the bit filling Node $n_i$, and $sp_i$ denotes whether to split the subspace. Given $\mathtt{T}$ with $N$ nodes to be filled, the action space size is $(2n)^N$, where $n$ is the number of dimensions and the factor of 2 comes from the decision of whether or not to split the subspace. Fig. 5: Workflow of Monte Carlo Tree Search-based BMTree construction. Transition. With the selected action $A$ for the unfilled nodes in $\mathtt{T}$, the framework constructs $\mathtt{T}'$ based on $A$. The transition is from the current partially constructed BMTree $\mathtt{T}$ to the newly constructed tree $\mathtt{T}'$, denoted by $\mathtt{T}' = Transition(\mathtt{T}, A)$. In our framework, we start from an empty tree, and construct the BMTree one level at a time during the decision process. In each iteration, the action generated by the policy fills one level of BMTree nodes (starting from Level 1), and generates the nodes one level deeper. Rewards Design.
After $\mathtt{T}$ evolves into $\mathtt{T}'$, we design a reward that reflects the expected query performance of $\mathtt{T}'$ to evaluate the goodness of Action $A$. One might consider executing queries using the corresponding BMTree to see how well the SFC helps decrease the I/O cost. However, this is time-consuming. Thus, we propose a metric, termed ScanRange (SR), that reflects the performance of executing a window query and can be computed efficiently. We construct the reward based on $\mathtt{T}'$'s SR. We define the function $SR_{\mathtt{T}}(q, \mathcal{D})$ as taking a query $q$ and a dataset $\mathcal{D}$ as input, and outputting the ScanRange of $q$ over $\mathcal{D}$. Efficient Reward Computing. $SR$ is computed as follows. Given a BMTree $\mathtt{T}$, we randomly sample data points from $\mathcal{D}$ with a sampling rate $r_s$. Then, the sampled data points are sorted according to their SFC values. To compute the SFC values on a partially constructed BMTree, we apply a policy extended from the $Z$-curve to the unfilled portions of the BMP in each subspace. The sorted data points are then evenly partitioned into $\frac{r_s |\mathcal{D}|}{|B|}$ blocks, where $|B|$ denotes the number of points per block. For a given window query $q$, represented by its minimum point $\mathbf{q}_{min}$ and maximum point $\mathbf{q}_{max}$, we calculate the SFC value of the minimum (resp. maximum) point as $v_{min} = C_{\mathtt{T}}(\mathbf{q}_{min})$ (resp. $v_{max} = C_{\mathtt{T}}(\mathbf{q}_{max})$). We denote the blocks that $v_{min}$ and $v_{max}$ fall into by $ID_{min}$ and $ID_{max}$, respectively.
We calculate $q$'s $SR$ given T and $\mathcal{D}$ as $SR_{\mathrm{T}}(q, \mathcal{D}) = ID_{max} - ID_{min}$. The calculation of SR is far cheaper than actually evaluating Query $q$. We develop a reward generator based on the evaluated $SR$, taking the performance of the $Z$-curve as a baseline. Given the dataset $\mathcal{D}$ and a query workload $\mathcal{Q}$, the generator sorts the data points based on their SFC values and computes the reward as:
$$ Rew = \sum_{q \in \mathcal{Q}} \left( SR_{\mathrm{Z}}(q, \mathcal{D}) - SR_{\mathrm{T}}(q, \mathcal{D}) \right) $$
Intuitively, the reward is positive if the BMTree constructed by the policy achieves a lower SR than the $Z$-curve. This design allows the agent to assess the actual performance of the constructed BMTree relative to the $Z$-curve. Empirical studies show that this choice helps the agent efficiently identify good actions. We normalize the reward by dividing it by the $Z$-curve's corresponding value.

Example 3: Refer to Figure 5. The partially constructed BMTree is represented by the bits filled at different levels, denoted by $\mathrm{T} = \{(1: \underline{\mathrm{X}}), (2: \underline{\mathrm{X}}\,\underline{\mathrm{Y}})\}$, where each tuple lists the bits filled at the corresponding BMTree level. The learned policy selects the action $A = \mathrm{X}\,\underline{\mathrm{Y}}\,\mathrm{Y}\,\mathrm{X}$. The next level of the BMTree is constructed based on $A$. The reward signal is computed based on the performance of the BMTree's newly added level. The BMTree then continues to be provided as input for constructing its next level.
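A minimal sketch of the SR metric and the reward just described, assuming SFC values are provided by a callable `sfc_value`; all names are illustrative:

```python
import bisect

def scan_range(sfc_value, sample, q_min, q_max, block_size):
    """ScanRange sketch: sort sampled points by SFC value, partition them
    into fixed-size blocks, and return ID_max - ID_min, the span of block
    IDs covered by the query's min/max corner points."""
    values = sorted(sfc_value(p) for p in sample)
    last_block = (len(values) - 1) // block_size
    def block_id(v):
        # block whose value range contains v
        return min(bisect.bisect_left(values, v) // block_size, last_block)
    return block_id(sfc_value(q_max)) - block_id(sfc_value(q_min))

def reward(sr_z, sr_t, queries, data):
    """Reward sketch: total ScanRange saved vs. the Z-curve baseline,
    where sr_z / sr_t are SR functions for the Z-curve and the BMTree."""
    return sum(sr_z(q, data) - sr_t(q, data) for q in queries)
```

With an identity "SFC" on 1-D points, 16 points and 4 points per block give blocks 0..3, so a query spanning values 2 to 10 covers block IDs 0 to 2, i.e., SR = 2.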
We proceed to present the proposed MCTS framework, which includes a BMTree T under construction, a policy tree that helps decide the actions and is updated gradually, and a reward generator that generates the reward based on T.

Policy Tree. MCTS [27, 35] is a model-based RL method. The high-level idea of MCTS is to search in a tree structure, where each node of the tree structure denotes a state. Given the current state, the objective of MCTS is to find the optimal child node (i.e., the next state) that potentially achieves an optimal reward. The structure that records and updates the historical states and their associated rewards is named the policy tree [35], where each child node denotes a possible next action (the design choice for the next level of the BMTree). We define it as follows:

Definition 4 (Policy Tree): The policy tree is a tree structure that models the environment. Each node of the policy tree corresponds to a state, representing a partially constructed BMTree. Moreover, every node stores: (1) the Action $A$ that constructs one more level of the BMTree, and (2) a reward value that reflects the goodness of choosing the node. The root node of the policy tree corresponds to an empty BMTree, and each path of the policy tree from the root node to a leaf node corresponds to a decision procedure for constructing a BMTree. The middle section of Fig. 5 illustrates an example of a policy tree.

Rollouts. To choose an action, MCTS checks the rewards that different action choices can achieve. To this end, MCTS makes several attempts in which it simulates several paths in the policy tree and checks whether an attempted path results in better performance. Then, MCTS updates the policy tree based on the simulations. This procedure is referred to as a rollout in [27], indicating the operation of repeatedly selecting different actions and ultimately choosing the optimal one.
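A policy-tree node per Definition 4 can be sketched as follows; the concrete fields are an illustrative assumption rather than the paper's implementation:

```python
class PolicyNode:
    """Sketch of a policy-tree node: the action that builds one more
    BMTree level, a value estimate, and a visit count for rollouts."""
    def __init__(self, action=None, parent=None):
        self.action = action    # action A producing this state's BMTree level
        self.value = 0.0        # best reward observed through this node
        self.visits = 0         # num(S): times observed during rollouts
        self.parent = parent
        self.children = {}      # maps an action to the resulting child state

    def child(self, action):
        """Return (creating if needed) the child state reached by `action`."""
        if action not in self.children:
            self.children[action] = PolicyNode(action, parent=self)
        return self.children[action]
```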
A rollout consists of four phases: (1) Selection, which selects the attempted path corresponding to a BMTree construction procedure; (2) Expansion, which adds the unobserved state node to the policy tree; (3) Simulation, which tests the selection's query performance; and (4) Backpropagation, which updates the reward value. We proceed to present the design of each of these four steps.

(1) Selection. The selection step aims to select a path in the policy tree that potentially achieves good performance. Starting from the current state $\mathbf{S}_t$ with the initialized path $\mathrm{Path} = \{\mathbf{S}_t\}$, we first check whether all child nodes have been observed in previous rollouts. If there are unobserved nodes, we choose one of them and add it to the path. Otherwise, we apply the Upper Confidence bounds applied to Trees (UCT) action selection algorithm [36] to select a child node that balances exploration vs. exploitation. Specifically, UCT selects the child node with the maximum value $v_{uct} = \frac{\mathrm{V}_{t+1}}{num(\mathbf{S}_{t+1})} + c \cdot \sqrt{\frac{\ln\left(num(\mathbf{S}_t)\right)}{num(\mathbf{S}_{t+1})}}$, where $\mathbf{S}_{t+1} = Transition(\mathbf{S}_t, A)$ is the state reached from $\mathbf{S}_t$ by Action $A$; $\mathrm{V}_{t+1}$ is the value of $\mathbf{S}_{t+1}$; $num(\mathbf{S}_{t+1})$ and $num(\mathbf{S}_t)$ denote the numbers of times Nodes $\mathbf{S}_{t+1}$ and $\mathbf{S}_t$, respectively, have been observed during rollouts; and $c$ is a factor that defaults to 1. Then, the selected node $\mathbf{S}_{t+1}$ is added to the path: $\mathrm{Path} = \{\mathbf{S}_t \to \mathbf{S}_{t+1}\}$. The selection step continues until the last node of Path is an unobserved node (i.e., the policy does not know the expected value of this node).
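The UCT rule above can be sketched as follows; the per-child `(value, visits)` statistics are assumed to be maintained by the policy tree:

```python
import math

def uct_select(parent_visits, children, c=1.0):
    """Pick the child maximizing V/num + c*sqrt(ln(num_parent)/num_child).
    `children` maps an action to (value, visits); unvisited children are
    assumed to be expanded before UCT applies, so visits >= 1 here."""
    def score(stats):
        value, visits = stats
        return value / visits + c * math.sqrt(math.log(parent_visits) / visits)
    return max(children, key=lambda action: score(children[action]))
```

The exploration term favors rarely visited children; with $c = 0$ the rule degenerates to pure exploitation of the mean value.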
Then, it returns the Path for the next step.

(2) Expansion. In the expansion step, the unobserved nodes in Path are added to the policy tree. The number of observations of each node $\mathbf{S}_t$ in Path, denoted as $num(\mathbf{S}_t)$, is incremented by 1. This value is then used to compute the average reward.

(3) Simulation. We simulate the performance of the selected Path by constructing the BMTree based on the actions stored in the nodes of the path. Then, the constructed BMTree is input to the reward generator to compute the tree's SR metric.

(4) Backpropagation. In this step, we update the value of each node in Path. We apply the maximum-value update rule that updates the value of a state $\mathbf{S}_t$ with the maximum reward it gains from simulation, computed by $\mathrm{V}_t' = \max(\mathrm{V}_t, Rew)$, where $\mathrm{V}_t$ is State $\mathbf{S}_t$'s old value, $Rew$ is the reward gained during the simulation, and $\mathrm{V}_t'$ is the updated value.

Example 4: Refer to Figure 5. State $\mathrm{S}_3$ corresponds to the input partially constructed BMTree. During the rollouts, we select Path $\{\mathrm{S}_3 \to \mathrm{S}_6 \to \mathrm{S}_{10}\}$ in the selection step. Then, the newly observed State $\mathrm{S}_{10}$ is expanded into the policy tree in the expansion step. We construct the BMTree based on the selected path and compute SR. In the backpropagation step, the values of $\mathrm{S}_3$, $\mathrm{S}_6$, and $\mathrm{S}_{10}$ (listed in red) are updated based on the SR computed at $\mathrm{S}_{10}$. After the rollouts procedure, the algorithm selects the action with the highest reward, and BMTree T is constructed accordingly. In the example, $\mathrm{S}_6$ is selected, having the largest value $\mathrm{V}_6'$ compared to the other child nodes.
Then, it returns Action XYYX to build the BMTree one level deeper.

Greedy Action Selection. We design the greedy action selection algorithm (GAS, for short) for the selection step in MCTS rollouts to find a good action for a partially constructed BMTree. Given T with $N$ nodes to be filled, GAS generates an action $A_g$ by greedily assigning to each BMTree node the bit that achieves the minimum SR, compared to the other bits, when T is filled with that bit. To summarize, the MCTS-based BMTree construction has two steps: BMTree initialization and MCTS-based BMTree learning. Detailed pseudo-code can be found in [29].

# VI. UPDATING PIECEWISE SFC

In this section, we proceed to handle BMTree maintenance. As described in Section III-A, when the distribution of the data and/or the query workload changes, the performance of the BMTree is no longer optimal. Retraining the whole BMTree from scratch would be inefficient and would waste resources. To achieve efficient piecewise-SFC updates, we design a mechanism that partially retrains a BMTree based on the pretrained BMTree instead of fully retraining a BMTree from scratch, while notably improving query performance under the new data and query distributions. We proceed to introduce: (1) Measurement of the degree of distribution shift, which determines whether BMTree nodes should be retrained; (2) Detection of the BMTree nodes to be retrained, which identifies the nodes to retrain for an optimal effectiveness-efficiency trade-off; and (3) Partial BMTree retraining, which enables partial retraining of the selected BMTree nodes while keeping the rest of the BMTree unchanged.

# A. Assessment of the Distribution Shift

Partial retraining of a piecewise SFC requires retraining a portion of the piecewise SFC while ensuring that the overall structure remains a piecewise SFC. Furthermore, detecting the subspaces whose retraining could improve query performance is non-trivial.
To address these issues, we follow the pre-designed BMTree structure modeling, where we split the domain w.r.t. the structure of the BMTree. As in Section IV, different nodes of the BMTree represent different subspaces. To achieve effective and efficient detection of the retraining subspace, we model the degree by which the data and query distributions drift within the BMTree structure as follows.

Fig. 6: Measuring distribution shifts within the BMTree.

Let N be a BMTree node that represents a subspace of the whole data space domain. Suppose an action is applied to N, and N is split into the two child nodes $\mathrm{N}_1$ and $\mathrm{N}_2$, where each child node denotes half of the original subspace. With actions further assigned to $\mathrm{N}_1$ and $\mathrm{N}_2$, the grandchild nodes of N split from $\mathrm{N}_1$ and $\mathrm{N}_2$ (4 grandchild nodes if both $\mathrm{N}_1$ and $\mathrm{N}_2$ are split) denote more fine-grained subspaces. We measure the distribution difference of the subspace denoted by N before and after the data or query workload is updated. We model the data and query distribution shifts of Node N as follows:

Modeling Data Shift. Suppose the split level is set to 2, meaning that the data shift of a BMTree node N is computed w.r.t. the grandchild nodes 2 levels deeper than N (4 grandchild nodes by default). The historical and updated datasets are denoted as $\mathcal{D}_o$ and $\mathcal{D}_u$, respectively. Refer to Fig. 6a. The data points are split w.r.t. the nodes. On the left side (resp. right side) of the figure, $\mathcal{D}_o$ (resp.
$\mathcal{D}_u$) is split into four parts at Level 2 of the subtree, and we represent the data distribution as $l_o^d = [\frac{7}{22}, \frac{3}{22}, \frac{6}{22}, \frac{6}{22}]$ (resp. $l_u^d = [\frac{5}{22}, \frac{6}{22}, \frac{8}{22}, \frac{3}{22}]$) w.r.t. the two datasets. Note that the representation of the data will change if the action assigned to the subtree is different. Then, we apply the Jensen–Shannon (JS) divergence [33] to measure the data shift, defined as follows:
$$ shift_d \triangleq D_{\mathrm{JS}}\left( l_o^d \,\middle|\middle|\, l_u^d \right) = \frac{1}{2}\Big( D_{\mathrm{KL}}\left( l_o^d \,\middle|\middle|\, l_{mix}^d \right) + D_{\mathrm{KL}}\left( l_u^d \,\middle|\middle|\, l_{mix}^d \right) \Big), $$
where $D_{\mathrm{KL}}$ denotes the Kullback–Leibler (KL) divergence function, defined as $D_{\mathrm{KL}}(l_o^d || l_u^d) = \sum_i l_o^d[i] \cdot \log\left( l_o^d[i] / l_u^d[i] \right)$, and $l_{mix}^d$ denotes the mixed distribution of $l_o^d$ and $l_u^d$: $l_{mix}^d = \frac{1}{2}\big( l_o^d + l_u^d \big)$. The larger the JS divergence $D_{\mathrm{JS}}(l_o^d || l_u^d)$, the greater the difference between $\mathcal{D}_o$ and $\mathcal{D}_u$.

Modeling Query Shift. For the query shift, we split the query set w.r.t. the BMTree nodes.
Specifically, we compute the center point of each query and divide the query set according to the center points, e.g., if a query is denoted by $(x_{min}, y_{min}, x_{max}, y_{max})$, its center point is computed as $(\frac{x_{min}+x_{max}}{2}, \frac{y_{min}+y_{max}}{2})$. Then, according to the grandchild nodes of N, the queries are split into 4 parts. Refer to Fig. 6b. On the left side of the figure, the old query set $\mathcal{Q}_o = \{q_1, q_2\}$ is split into a list of four subsets $l_o^q = \{\{q_1\}, \{\}, \{q_2\}, \{\}\}$, while on the right side the updated query set $\mathcal{Q}_u = \{q_1, q_2, q_3\}$ is split as $l_u^q = \{\{\}, \{q_2, q_3\}, \{q_1\}, \{\}\}$. Unlike the data-shift measure, which directly compares the numbers of data points in the list elements, we cluster the queries in each node into different clusters w.r.t. area and aspect ratio, and then compute the JS divergence of each list element. The JS divergence is then averaged across the list elements:
$$ shift_q \triangleq \frac{1}{|l_o^q|} \sum_i D_{\mathrm{JS}}\left( l_o^q[i] \,\middle|\middle|\, l_u^q[i] \right) $$
After modeling the distribution shifts of both data and queries, we next introduce how to decide which subspaces to retrain. When the retraining procedure begins, the method recursively computes the data and query shifts of nodes in the BMTree T in Breadth-First Search (BFS) order. We restrict the distribution-shift computation to a limited depth of nodes in T, since nodes at larger depths represent small data subspaces and contribute limited performance improvement.
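The JS-divergence data shift and the query bucketing described above can be sketched as follows; the quadrant layout of the four grandchild subspaces is an illustrative assumption:

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence: mean KL divergence of the two
    distributions against their mixture (0 when p == q)."""
    def kl(a, b):
        return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    mix = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * (kl(p, mix) + kl(q, mix))

def query_center(q):
    """Center point of a window query (x_min, y_min, x_max, y_max)."""
    x0, y0, x1, y1 = q
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def split_queries(queries, x_mid, y_mid):
    """Bucket queries into a node's 4 grandchild subspaces by center point."""
    buckets = [[], [], [], []]
    for q in queries:
        cx, cy = query_center(q)
        buckets[(2 if cx >= x_mid else 0) + (1 if cy >= y_mid else 0)].append(q)
    return buckets

# data-shift score from the running example (Fig. 6a)
shift_d = js_divergence([7/22, 3/22, 6/22, 6/22], [5/22, 6/22, 8/22, 3/22])
```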
When both data and queries are shifted, the shift scores of data and queries are summed with weights to obtain the final shift score: $shift_m = \alpha \cdot shift_d + (1 - \alpha) \cdot shift_q$, where $\alpha$ is the weight parameter, set by default to 0.5. Then, the nodes to be retrained are filtered based on this score. A shift threshold $\theta_s$ is set to filter the BMTree nodes: nodes with a shift score lower than $\theta_s$ are not retrained.

# B. Deciding Which BMTree Nodes to Partially Retrain

Observe that the performance optimization potential of each node does not depend solely on the degree of distribution shift. Instead, we introduce a score based on the change in average ScanRange $(SR_{\mathrm{T}})$ before and after the data and/or queries change, to quickly measure the potential optimization gained by retraining a node. The optimization-potential score OP of Node N is computed based on $SR$ as follows:
$$ \mathrm{OP}(\mathrm{N}, \mathcal{D}_o, \mathcal{D}_u, \mathcal{Q}_o', \mathcal{Q}_u', \mathrm{T}) = \underset{q_u \in \mathcal{Q}_u'}{\mathrm{avg}}\, SR_{\mathrm{T}}(q_u, \mathcal{D}_u) - \underset{q_o \in \mathcal{Q}_o'}{\mathrm{avg}}\, SR_{\mathrm{T}}(q_o, \mathcal{D}_o), $$
where $\mathcal{Q}_o'$ (resp. $\mathcal{Q}_u'$) denotes the subset of the historical (resp. updated) query workload that the BMTree node N contains.
$\underset{q_u \in \mathcal{Q}_u'}{\mathrm{avg}}\, SR_{\mathrm{T}}(q_u, \mathcal{D}_u)$ and $\underset{q_o \in \mathcal{Q}_o'}{\mathrm{avg}}\, SR_{\mathrm{T}}(q_o, \mathcal{D}_o)$ denote the average $SR$ of the queries $q_u \in \mathcal{Q}_u'$ and $q_o \in \mathcal{Q}_o'$ on BMTree T, respectively. Then, we compare the filtered BMTree nodes one level at a time, and select the nodes with the maximum OP score as the nodes to be retrained. During retraining, to ensure a certain degree of efficiency improvement, a retraining constraint ratio $R_{rc}$ is set to limit the area of the retrained subspaces relative to a full retrain (e.g., if $R_{rc} = 0.5$, the accumulated area of the retrained subspaces should not exceed half of the whole space).

Algorithm 1: Deciding on Which BMTree Nodes to Retrain.
Algorithm 2: BMTree Structure Partial Retraining.

The algorithm for detecting the BMTree nodes that need retraining is listed in Algorithm 1. First, it initializes a queue $\mathcal{N}$ with the root node of T (Line 1). Then, the algorithm processes the BMTree one level at a time (Lines 3–14). The leftmost node of $\mathcal{N}$ is popped (Line 4), and its shift score $s$ is computed by the ShiftScore function, given the information about the dataset update and the BMTree as described above (Line 5). Then, the nodes satisfying the threshold $\theta_s$ are added to $\mathcal{L}$, and the OP of N is computed w.r.t. Eq. 6 (Lines 6–7). Once a level of the BMTree has been evaluated (Line 9), the algorithm sorts $\mathcal{L}$ w.r.t. $\mathrm{OP}_{\mathrm{N}}$ (Line 10). Then, if the nodes with the greatest OP scores satisfy the retraining constraint ratio $R_{rc}$ (Line 12), they are added to the retraining-nodes list (Line 13).

# C. BMTree Reconstruction and Retraining

We proceed to introduce the BMTree reconstruction and retraining procedures.
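Returning briefly to the detection step of Section VI-B, the OP score of Eq. 6 and the level-wise greedy selection under $R_{rc}$ can be sketched as follows, with a hypothetical node type and helper callables:

```python
from collections import namedtuple
from statistics import mean

# hypothetical node type: a name, its subspace's area fraction, and a
# membership test for queries
DemoNode = namedtuple("DemoNode", "name area contains")

def op_score(sr, node, q_old, q_new, d_old, d_new):
    """OP score sketch (Eq. 6): change in the average ScanRange of the
    queries contained in `node` before vs. after the update."""
    old = mean(sr(q, d_old) for q in q_old if node.contains(q))
    new = mean(sr(q, d_new) for q in q_new if node.contains(q))
    return new - old

def select_by_op(candidates, ops, budget):
    """Per-level greedy selection under the retraining constraint R_rc:
    take nodes in decreasing OP order while their accumulated subspace
    area still fits within the budget."""
    chosen = []
    for node in sorted(candidates, key=lambda n: -ops[n.name]):
        if node.area <= budget:
            chosen.append(node)
            budget -= node.area
    return chosen
```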
First, we initialize the BMTree w.r.t. the pre-trained BMTree and the to-be-retrained BMTree nodes resulting from the detection procedure. Once the retraining domain (i.e., the to-be-retrained BMTree nodes) is decided, retraining that portion of the piecewise SFC while keeping the rest of the designed piecewise SFC unchanged is non-trivial. Revisiting the design of seamless partitioning and BMP generation introduced in Section IV, we propose to partially maintain the BMTree structure and then conduct the retraining procedure.

Fig. 7: Partial BMTree retraining procedure.

To conduct the retraining procedure, we first manipulate the BMTree so that partial retraining can be completed by regenerating the BMTree structure. As in Fig. 7, suppose $\mathrm{N}_1$ and $\mathrm{N}_2$ are two nodes that represent the subspaces to be retrained. In the initialization of the retraining procedure, we delete the child nodes of $\mathrm{N}_1$ and $\mathrm{N}_2$ as well as the actions assigned to $\mathrm{N}_1$ and $\mathrm{N}_2$, while the other nodes remain unchanged. The BMTree's unchanged portion is shown in the middle of Fig. 7. Then, we input the partially deleted BMTree T into an RL environment for retraining. We apply the MCTS method for retraining. Different from the environment introduced for training the BMTree from scratch (Section V), the environment here is developed to support partially retraining the BMTree, and is designed as follows:

(1) State. We design the state of the retraining RL environment as the nodes to be retrained. In Fig. 7, $\mathrm{N}_1$ and $\mathrm{N}_2$ are initialized in the state: $\mathbf{S} = \{(\mathrm{N}_1: \mathrm{None}), (\mathrm{N}_2: \mathrm{None})\}$, where None denotes that no action has been taken yet.

(2) Action.
Then, we design the action space as the actions assigned to $\mathrm{N}_1$ and $\mathrm{N}_2$. After an action is decided, the child nodes of $\mathrm{N}_1$ and $\mathrm{N}_2$ are generated, and these child nodes represent the transited state. The nodes of T in one state are not required to have identical depth.

(3) Reward Design. We apply the updated dataset and query workloads to generate the reward as in Section V. The RL policy-training environment produces a regenerated BMTree, denoted $\mathrm{T}'$, w.r.t. the redesigned state, action, and reward. Further, to improve efficiency, during the partial retraining procedure we only generate rewards w.r.t. the queries in $\mathcal{Q}_u$ that fall in the retrained nodes.

# D. Workflow for Partial Retraining

We summarize the partial retraining procedure as follows: (1) If there exists a BMTree node satisfying the shift-score filter, the retrain-node detection procedure is conducted (Alg. 1). Then, the retraining RL environment is initialized, and the BMTree is regenerated based on the updated data and queries. In particular, if the retraining result does not meet expectations (e.g., an optimization ratio of less than $1\%$), the procedure selects and retrains more untrained nodes as in Section VI-C.
input: $n$-dimensional old and updated datasets $\mathcal{D}_o$, $\mathcal{D}_u$; old and updated training workloads $\mathcal{Q}_o$, $\mathcal{Q}_u$; BMTree T
output: Retrained BMTree $\mathrm{T}'$
1: if $\exists \mathrm{N} \in \mathrm{T}$ with $shift_m(\mathrm{N}) \geq \theta_s$ then
2: &nbsp;&nbsp; $\mathcal{R} \gets \mathrm{RetrainDetect}(\mathrm{T}, \mathcal{D}_o, \mathcal{D}_u, \mathcal{Q}_o, \mathcal{Q}_u)$;
3: &nbsp;&nbsp; $S, \mathrm{T}_p \gets \mathrm{Initial}(\mathcal{R}, \mathrm{T})$;
4: &nbsp;&nbsp; $\mathcal{Q}_u' \gets \{q \mid q \in \mathcal{Q}_u \land \exists \mathrm{N} \in \mathcal{R} \text{ s.t. } \mathrm{N} \text{ contains } q\}$;
5: &nbsp;&nbsp; $\mathrm{T}' \gets \mathrm{MCTS}(\mathrm{T}_p, \mathcal{D}_u, \mathcal{Q}_u')$;
6: &nbsp;&nbsp; if limited optimization then retrain $\mathrm{T}'$;
7: return $\mathrm{T}'$

The pseudo-code of the retraining procedure is given in Alg. 2. If there exists a BMTree node that satisfies the shift-score requirement (Line 1), the algorithm first detects the BMTree nodes to be retrained (Line 2). Then, the RL retraining environment, the partially deleted BMTree $\mathrm{T}_p$, and the initial state for RL training are initialized (Line 3), and the queries contained by the to-be-retrained BMTree nodes are selected for retraining (Line 4). The MCTS algorithm, with the environment redesigned as above, is applied to regenerate the BMTree w.r.t. the updated database and query workload (Line 5).
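A runnable sketch of the Alg. 2 workflow, assuming hypothetical callables that stand in for the paper's RetrainDetect, Initial, and MCTS procedures:

```python
def partial_retrain(tree, d_old, d_new, q_old, q_new,
                    shift_fn, theta_s, detect_fn, init_fn, mcts_fn):
    """Sketch of Algorithm 2. `tree.nodes` and the helper callables
    (detect_fn, init_fn, mcts_fn) are illustrative assumptions standing
    in for RetrainDetect, Initial, and MCTS."""
    if not any(shift_fn(n) >= theta_s for n in tree.nodes):
        return tree                                           # no significant shift
    retrain = detect_fn(tree, d_old, d_new, q_old, q_new)     # Line 2
    state, pruned = init_fn(retrain, tree)                    # Line 3
    q_sub = [q for q in q_new
             if any(n.contains(q) for n in retrain)]          # Line 4
    return mcts_fn(pruned, d_new, q_sub)                      # Line 5
```

The "limited optimization" fallback of Line 6 (selecting more nodes when the gain is below 1%) is omitted here for brevity.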
If $\mathrm{T}'$ yields only limited performance enhancement, i.e., if $\mathrm{T}'$ improves the ScanRange by less than $1\%$ compared with the original T, $\mathrm{T}'$ is retrained further (Line 6). After retraining of the BMTree nodes is complete, the SFC values of the data need to be updated. With the BMTree being only partially retrained, only the data located in the retrained subspaces need to be updated. The retraining procedure thus also reduces the cost of the subsequent index-update procedure.

TABLE I: Experiment Parameters.

# VII. ANALYSIS AND DISCUSSION

Injection and Monotonicity. We prove that piecewise SFCs modeled by the BMTree satisfy both the injection and monotonicity properties. The proof is detailed in [29].

Time Complexity Analysis. We provide time complexities for SFC value computation and MCTS-based BMTree construction. The time complexity of computing the SFC value of $\mathbf{x}$ using the constructed BMTree is $O(M)$, where $M$ is the length of $C_{\mathrm{T}}(\mathbf{x})$. This complexity is comparable to those of other SFCs described by BMPs. For BMTree construction, the complexity of MCTS-based BMTree construction is $O\left( M \cdot \left( N + |\mathcal{D}_s| \left( M + \log |\mathcal{D}_s| \right) + |\mathcal{Q}| \right) \right)$, where $N$ is the child-node size of the policy tree, and $|\mathcal{D}_s|$ and $|\mathcal{Q}|$ are the sizes of the sampled data and the query workload, respectively. It takes at most $M$ actions to construct the BMTree. In each step of choosing an action, the selection step is bounded by the child-node size, $O(N)$; the simulation time corresponds to the computation of the ScanRange, which takes $O(M \cdot |\mathcal{D}_s|)$ for SFC value computation, $O(|\mathcal{D}_s| \cdot \log |\mathcal{D}_s|)$ to sort the data, and $O(|\mathcal{Q}|)$ to compute the ScanRange for each query.
For BMTree updates, suppose the BMTree construction complexity is denoted as $T(\mathrm{BMT\_Train})$. The BMTree retraining time is then bounded by $R_{rc} \cdot T(\mathrm{BMT\_Train})$, with $R_{rc}$ the retraining constraint ratio, since this ratio limits the retrained nodes and the depth of the BMTree that must be generated by the MCTS algorithm.

# VIII. EVALUATION

The experiments aim to evaluate the following: (1) The effectiveness of the BMTree's design, including (i) evaluating the proposed piecewise-SFC method vs. existing SFCs when applied to SFC-based indexes vs. other indexes, (ii) the BMTree under different settings (e.g., scalability, dimensionality, and aspect ratio), and (iii) the components of the BMTree by evaluating different BMTree variants. (2) The effectiveness of partial retraining, including (i) evaluating the performance of partial retraining while varying the distribution-shift settings, and (ii) evaluating the choice of parameters, e.g., the retraining constraint and the shift-score threshold, during partial retraining.

# A. Experimental Setup

Dataset. We conduct experiments on both synthetic and real datasets. For the synthetic datasets, we generate data points in a two-dimensional data space with a granularity of $2^{20} \times 2^{20}$ that follow either a uniform distribution (denoted as UNI) or a Gaussian distribution (denoted as GAU) with $\mu_d$ as the center point of the space domain. The real dataset OSM-US contains about 100 million spatial objects in the U.S. extracted from the OpenStreetMap API [37], and TIGER [38] contains 2.3 million water areas in North America, cleaned by SpatialHadoop [39].

Query Workload. We follow [24, 12] to generate query workloads.
We generate various types of window queries, where each query type has a fixed area selected from $\{2^{30}, 2^{32}, 2^{34}\}$ and a fixed aspect ratio selected from $\{4, 1, 1/4\}$. Each workload comprises multiple query types with different combinations of areas and ratios. We generate queries with Uniform (UNI) and Gaussian (GAU) distributions (as in [4, 12]). We also generate a skewed workload (denoted by SKE), in which queries follow Gaussian distributions with different $\mu$ values.

Index Structures. To evaluate the performance of the proposed piecewise SFC compared with the existing SFCs, we integrate the proposed piecewise SFC and the baseline SFCs into both traditional indexes and learned index structures. First, we integrate the piecewise SFC (and the baseline SFCs) into the PostgreSQL database system, employing PostgreSQL's built-in $\mathbf{B}^+$-Tree variant with SFC values as keys. Second, we use a learned spatial index, RSMI [40], to compare the performance of the piecewise SFC against the baseline SFCs within RSMI. The $\mathbf{B}^+$-Tree of PostgreSQL is a disk-based index, while the released implementation of RSMI [40] is memory-based. We choose them to evaluate the performance of the piecewise SFC under various scenarios.

SFC Baselines. We choose the following SFC methods as our baselines: (1) the Z-curve [20, 30]; (2) the Hilbert curve [21]; (3) QUILTS [24].

Evaluation Metrics. For experiments conducted with PostgreSQL, we use the I/O cost (I/O) recorded by PostgreSQL and the Query Latency (QL). For experiments under RSMI, we report the number of node accesses of its tree structure and QL, for a fair comparison following [40, 41].

Parameter Settings. Table I lists the parameters used in our experiments, with the default settings in bold. We set the number of rollouts (as described in Section V) in MCTS to 10 by default.
The max depth is the depth of the BMTree built via the RL model; the sampling rate (0.05 by default) is the rate at which training data is sampled for computing the ScanRange.

Evaluation Platform. We train the BMTree with PyTorch 1.9 and Python 3.8. The experiments are conducted on an 80-core server with an Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz and 64.0GB RAM; no GPU resources are used.

# B. Evaluation of the BMTree

1) Effectiveness: This experiment compares the effectiveness of the learned piecewise SFC in query processing against the other SFCs under both the PostgreSQL and RSMI environments. We also compare an SFC-based index combined with the BMTree against other indexes. For each experiment, we use 1000 window queries, randomly generated by following the respective distributions, for training, and another 2000 different window queries following the same distribution for evaluation.

Results on PostgreSQL. Figures 8a and 8b show the I/O and QL on window queries. To ensure that PostgreSQL uses indexscan during querying, both bitmapscan and seqscan in PostgreSQL are disabled.

Fig. 8: Results under PostgreSQL, where the first and second lines under the x-axis denote the query and data distributions.
Fig. 9: Results using the RSMI learned index structure.
Fig. 10: Performance of $k$NN queries.
Fig. 11: Optimization of window query & $k$NN query.
We do not include the Hilbert curve in this experiment because the Hilbert curve requires an additional structure and a dedicated algorithm to return accurate results for window queries, and PostgreSQL does not support these for the Hilbert curve. Observe that the proposed BMTree consistently outperforms the baselines across all combinations of data and query distributions in terms of both I/O and QL. Between the two baselines, QUILTS performs worse on the SKE workload and performs similarly to the Z-curve on the UNI and GAU workloads. The reason is that our query workload contains queries with different aspect ratios (e.g., 4 and $1/4$), rather than queries with similar aspect ratios as used in QUILTS [24]. QUILTS can only choose queries with a particular aspect ratio to optimize, which results in poor performance for queries with other aspect ratios. The BMTree outperforms the Z-curve by $5.2\%$–$39.1\%$ (resp. $7.7\%$–$59.8\%$, $6.3\%$–$29.8\%$, and $25.1\%$–$77.8\%$) in terms of I/O on the UNI (resp. GAU, OSM-US, and TIGER) datasets across the various workloads. The results in terms of QL are consistent with those for I/O. The BMTree's superior performance is because (1) the BMTree generates piecewise SFCs to handle distinct query distributions, and (2) the BMTree is equipped with effective learning to generate BMPs and subspaces. Notice that under the UNI workload, the BMTree outperforms the Z-curve by $25.1\%$ on TIGER, while it only slightly outperforms the Z-curve on the other three datasets. This is expected: under the UNI query workload, the BMTree can only exploit the data distribution, but not the query distribution, to optimize performance, and TIGER is very skewed, so the BMTree can capture TIGER's skewed data nature.

Results on RSMI. The original RSMI [12] uses the Hilbert curve, and we include it as a baseline for this experiment, as RSMI returns approximate results for all curves. All curves achieve comparable recall $(99.
5\%$ or above) using RSMI's algorithms for window queries. Figures 9a and 9b show the number of node accesses and the QL for all curves when using RSMI. Observe that the BMTree consistently outperforms all baselines. The BMTree outperforms the Z-curve by 18.2%–29.0% (resp. 13.7%–28.4%, 13.5%–26.5%, and 2.8%–25.3%) in terms of the number of node accesses on the UNI (resp. GAU, OSM-US, and TIGER) datasets. Also, observe that the Hilbert curve achieves performance similar to that of the BMTree on the GAU dataset, which could be attributed to its good tolerance to data skew [42].

Comparison with Other Indexes. We compare the performance of two SFC-based indexes, RSMI and ZM, combined with our BMTree, against baseline indexes including (1) two R-tree variants: STR [43] and R*-Tree [44]; and (2) two partition-based methods: Grid-File and Quad-Tree. The results, given in [29], reveal the generality of the BMTree in enhancing query performance when combined with different indexes.

(Figure 10 panels: (a) I/O Cost ratio (%), (b) Query Latency ratio (%), for the Z-curve, QUILTS, and the BMTree. Figure 11 panels: (a) Window Query I/O and (b) kNN Query I/O, varying the percentage of kNN training queries.)

Effect on kNN Queries. The piecewise SFC is learned to optimize window queries. Here, we investigate its influence on the performance of kNN queries. We generate 1,000 kNN query points following the data distribution, and we apply the kNN algorithm [12] in PostgreSQL with $k$ set to 25.
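The kNN algorithm of [12] answers a kNN query on top of a window-query index; one common realization of this idea is to issue window queries of growing half-width around the query point until k verified neighbors are found. A simplified, self-contained sketch (a plain list scan stands in for the index lookup; parameter names are our own):

```python
def knn_via_windows(points, q, k, r0=0.01, grow=2.0):
    """kNN via window queries of growing half-width r around q.

    Assumes len(points) >= k. The result is exact: once the k-th nearest
    candidate lies inside the window (distance <= r), no point outside
    the window can be closer than it.
    """
    r = r0
    while True:
        # A window query on the SFC index would replace this linear scan.
        cand = [p for p in points
                if abs(p[0] - q[0]) <= r and abs(p[1] - q[1]) <= r]
        if len(cand) >= k:
            cand.sort(key=lambda p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
            kth = ((cand[k - 1][0] - q[0]) ** 2 +
                   (cand[k - 1][1] - q[1]) ** 2) ** 0.5
            if kth <= r:        # verified: window already covers the kNN ball
                return cand[:k]
        r *= grow               # enlarge the window and retry
```

Since every probe is a window query, the kNN cost inherits the window-query quality of the underlying SFC, which is why a curve optimized for windows does not hurt kNN performance here.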
We report the I/O and QL ratios in Figures 10a and 10b, i.e., the results of each curve divided by those of the Z-curve. The BMTree performs slightly better than the baselines on GAU and OSM-US, while the Z-curve is slightly better on UNI and TIGER. Thus, while the piecewise SFC is optimized for window queries, its kNN query performance is not compromised.

Optimizing Window and kNN Queries. We evaluate the performance when window queries and kNN queries are optimized together. To optimize the BMTree for kNN queries, we convert kNN queries into window queries by following [12] and include them in the training workload. Then, we vary the weight of the objective based on kNN queries relative to window queries from 0% to 100% during training. Figures 11a and 11b give the window and kNN query I/Os. Observe that as the weight increases, the window query I/O tends to increase while the kNN query I/O tends to decrease. Also, observe that when the weight is between 25% and 75%, the performance of the window query only mildly degrades, while the performance of the kNN query is better than that based on the Z-curve. The results show the potential of the BMTree to optimize the two query types together.

2) Effect of Varying the Settings: We evaluate the performance of the BMTree under various settings: dataset/query size, dimensionality, and window aspect ratio. More settings can be found in [29].

Scalability of the Learned SFCs. To evaluate the scalability of the BMTree, we evaluate the performance of the SFCs by varying the data size from 0.1 to 150 million. We construct the BMTree using $10^5$ sampled data points as input for RL training, and the other settings follow the defaults. The results are in Figure 12.
Observe that the BMTree displays a linear trend for both I/O and QL as the data size increases. We observe similar trends for the baselines.

Effect of Higher Dimensionality. To evaluate the effect of dimensionality on the effectiveness of the learned SFC, we vary the dimensionality from 2 to 6 on datasets with both the uniform and normal distributions. We report the I/O in Figure 13. The BMTree consistently outperforms the baselines, and saves up to 54% of the I/O cost compared to the best Z-curve baseline. This demonstrates that the BMTree generalizes well to data with more than two dimensions.

Fig. 12: Performance vs. dataset size.
Fig. 13: I/O cost vs. dimensionality.
Fig. 14: Varying query aspect ratio and selectivity: (a) varying aspect ratio, (b) varying selectivity (%).
Fig. 15: I/O Cost on BMTree variants.

Effect of Varying Query Aspect Ratio and Selectivity. (1) We evaluate the BMTree performance by varying query aspect ratios from $\{4, \frac{1}{4}\}$ to $\{128, \frac{1}{128}\}$, and the results are reported in Figure 14a. Observe that the BMTree performs consistently better than the other SFCs across the different aspect ratios, including very wide ones. (2) We vary the query selectivity from 0.0001% to 1%, and report the results in Figure 14b. Observe that the improvement of the BMTree is subtle under very small query ranges. This is because, for small query ranges, the points that fall within a range are few, and the index tends to perform similarly for the different SFCs.

3) Evaluating BMTree Variants: We study four BMTree variants: BMTree-DD (data-driven, with the dataset only), BMTree-noGAS (without the GAS algorithm), BMTree-greedy (purely greedy), and BMTree-LMT (with limited BMPs). The results are in Figure 15. (1) BMTree-DD.
We evaluate the BMTree performance when the query workload is not available. We generate training queries for the BMTree by following the dataset's distribution. From Figure 15, observe that BMTree-DD performs comparably to the BMTree on the UNI workloads for all datasets. However, on the SKE workload, the BMTree generally performs much better. (2) BMTree-noGAS. We evaluate the effectiveness of the GAS algorithm. Observe the performance drop compared to MCTS using GAS, which shows the effect of GAS. (3) BMTree-greedy. We apply GAS for all action selections, and build a purely greedy BMTree. Observe that MCTS with GAS outperforms both BMTree-noGAS and BMTree-greedy. This indicates a synergistic improvement of MCTS combined with GAS. (4) BMTree-LMT. Instead of considering all BMPs, we design a baseline BMTree where only the Z- and C-curves are allowed to be assigned to the subspaces. We observe a significant improvement of the BMTree over using the Z- and C-curves alone. This demonstrates the necessity of considering all BMPs.

# C. Effectiveness of Partial Retraining of the BMTree

1) Varying Distribution Shift Settings: We evaluate the effectiveness of our proposed partial retraining mechanism. Specifically, we evaluate three situations: data shift while the queries are fixed, query shift while the data is fixed, and the composed scenario. We compare three methods: (1) keeping the original BMTree unchanged (denoted as BMT-O), (2) the fully retrained BMTree (denoted as BMT-FR), and (3) the partially retrained BMTree (denoted as BMT-PR). We restrict the partial retraining constraint ratio to 0.5, i.e., at most half of the space area can be retrained. We also evaluate the performance while varying the retraining constraints.

Evaluation of Data Shift.
We evaluate three different metrics: the I/O cost and query latency of the constructed BMTree, and the training time needed to retrain the BMTree. The data shifts from the GAU to the UNI distribution. The results are in Fig. 16. In Fig. 16a, the partially retrained BMTree BMT-PR improves the I/O cost by 9.2% to 12.1% over the original BMTree BMT-O as the shift percentage varies. Compared to the fully retrained BMTree BMT-FR, BMT-PR attains, on average, 90.6% of the performance improvement achieved by BMT-FR. Under a 90% data shift, BMT-PR outperforms BMT-FR and achieves an over 2.3× reduction in I/O cost compared with BMT-FR. The reason is that BMT-PR allows the agent to focus on optimizing the subspace with vital distribution changes, which allows the partially retrained BMTree to achieve better performance on this focused subspace. The results of the query latency are generally consistent with those of the I/O cost, where BMT-PR attains, on average, 91.7% of the performance improvement achieved by BMT-FR, as in Fig. 16b. As for the training time (Fig. 16c), BMT-FR costs from 7857s to 8314s (8125.5s on average) to retrain the BMTree from scratch, while BMT-PR costs from 559.2s to 3778.8s (1833.3s on average). BMT-PR achieves approximately a 4.4× reduction in training time compared with BMT-FR, which is aligned with the time complexity estimation: with a retraining constraint ratio of $R_{rc}$, the training time of BMT-PR is upper bounded by $R_{rc} \cdot \mathrm{time}(\mathrm{BMT\text{-}FR})$.

Evaluation of Query Shift. We proceed to evaluate the effect of query workload shift. Under the GAU data, we shift the distribution of the query workload.
Specifically, we vary the $\mu$ values of the Gaussian distributions of the queries, and generate two skewed query workloads, namely $\mathrm{SKE}_1$ and $\mathrm{SKE}_2$. The query workload is shifted from $\mathrm{SKE}_1$ to $\mathrm{SKE}_2$. We evaluate the retraining performance by varying the shift percentage. The results are in Fig. 17. Both the fully and partially retrained BMTrees, BMT-FR and BMT-PR, offer limited optimization compared to the original BMTree (less than 1% on I/O cost) before the shift percentage reaches 50%. However, when the shift percentage reaches 70%, the optimization potential becomes substantial. BMT-PR reduces the I/O cost by 7.3% to 16.7% compared with BMT-O. We observe that BMT-PR achieves better I/O cost than BMT-FR from shift percentages 70% to 90% (over 1.8× at 90%). This reveals that retraining a subspace with a significant change in query workload may enhance performance significantly compared with retraining a BMTree for the whole data space. The query latency results are similar to those of the I/O cost (as in Fig. 17b), which is consistent with the data shift situation. The training time (Fig. 17c) also aligns with the time complexity evaluation. BMT-FR spends from 7295.5s to 8485.8s (7716.9s on average) to retrain the BMTree, while BMT-PR spends from 255.9s to 1571.1s (1237.1s on average), achieving an over 6.2× reduction in training time.

Evaluation of Mixed Shift of Data and Query. We evaluate the scenario where both the data and the queries shift, and the shift settings of data and query follow the former experiments. We select shift percentages from $\{25\%, 50\%, 75\%\}$ for both data and query. The results are in Fig. 18. The partially retrained BMTree BMT-PR achieves different levels of optimization when varying the data and query shift percentages.
BMT-PR achieves remarkable I/O cost reductions when the query or data shift reaches 75% (the 3rd row and the 3rd column in Fig. 18 denote when the data and query shift reaches 75%, respectively), achieving on average an 8.3% (resp. 16.5%) reduction in I/O cost under a 75% data shift (resp. query shift) compared with BMT-O. BMT-PR outperforms BMT-FR in certain cases (e.g., the 25% × 25% data-query shift), and achieves competitive performance in the majority of situations. BMT-PR remains efficient compared with BMT-FR in training time. Only under the 50% query shift is the training time reduction of BMT-PR less than 2× compared with BMT-FR: there, a second partial retraining is triggered since the first partial retraining does not achieve notable optimization. Compared with BMT-FR, a full retraining is preferable in that case, as the action in the root node of the BMTree needs to be modified.

2) Varying the Retraining Hyperparameters: We evaluate how the retraining hyperparameters affect the I/O cost. We consider the retraining constraint ratio and the shift score threshold. We conduct the hyperparameter evaluations under 75% × 75% data and query shifts. The results are in Fig. 19. (1) The Retraining Constraint Ratio $R_{rc}$. We vary $R_{rc}$ from 0.1 to 1 to study how $R_{rc}$ affects the retraining performance. As in Fig. 19a, with very small retraining constraint ratios (0.1 and 0.2), BMT-PR achieves almost no performance enhancement compared to BMT-O, as such ratios prevent the retraining method from retraining the important nodes that do not satisfy the constraint. BMT-PR achieves optimal performance with a constraint ratio of 0.5, which allows retraining the nodes that most affect query performance. (2) The Shift Score Threshold. We vary the shift score threshold from 0.1 to 0.5. As in Fig.
19b, observe that with thresholds from 0.1 to 0.35, BMT-PR has performance similar to that of a full retrain. When the threshold is 0.4 or above, it filters out nodes that would achieve notable performance improvement (with a threshold lower than 0.4), and retraining becomes ineffective. This identifies the best choices for the shift score threshold.

Fig. 16: Evaluation of data shift while fixing the queries; the data is shifted from GAU to UNI with the UNI percentage varying from 10% to 90%: (a) performance of I/O cost, (b) performance of query latency, (c) performance of training latency, for BMT-O, BMT-FR, and BMT-PR.
Fig. 17: Query shift while fixing the data; the query workload is shifted from 10% to 90% toward the new query workload.
Fig. 18: Evaluation of mixed shift of data and query.
Fig. 19: Varying the retraining constraint ratio & the shift score threshold.

# IX. RELATED WORK

Space-Filling Curves (SFCs). Many SFCs have been developed. The C-curve [19] organizes the data points one dimension at a time. The Z- and Hilbert curves [16, 17, 18, 19, 20, 21, 22] are widely used in index design. Despite the success of these SFCs, they do not consider the data and query workload distributions. QUILTS [24] is proposed to consider the data and query distributions in designing the mapping function of SFCs. All these SFCs, including QUILTS, adopt a single mapping scheme that may not always be suitable for the whole data space and query workload (Section I). This paper proposes the first piecewise SFC that uses different mapping functions for different data subspaces, and considers both the data and query distributions. Furthermore, we propose a reinforcement learning-based method that learns SFCs to directly optimize performance. Following the BMTree [29], there is work [45, 46] that leverages learning to construct SFCs.
The proposed piecewise SFC design potentially extends the design space of SFCs. Moreover, these studies do not consider fast updating of SFCs. This paper proposes partially regenerating the BMTree, which reduces the update cost and only requires part of the data to update the SFC values.

SFC-based Index Structures. SFCs are used for indexing multi-dimensional data and are widely adopted by DBMSs. Also, SFCs are essential for learned multi-dimensional indexes (e.g., [11, 12, 47]). ZM [11] combines a Z-curve with a learned index, namely RMI [2]. RSMI [12] applies the Hilbert curve together with a learned index structure for spatial data. Pai et al. [48] present preliminary results on an instance-optimal Z-index based on the Z-curve that adapts to data and workload. SFC-based indexes can also be applied for data skipping [49, 50, 51], which aims to partition and organize data into data pages so that querying algorithms only access pages that are relevant to a query. An SFC-based approach [15] maps multi-dimensional points to scalar values using an SFC, and uses the B$^+$-Tree or a range-partitioned key-value store (e.g., HBase) for partitioning and organizing the data.

Analysis of SFCs. Many studies, e.g., [52, 20, 53, 21, 42, 24], evaluate SFCs. Mokbel et al. [52, 20, 53] discuss the characteristics of good SFCs. Moon et al. [21] study the number of disk seeks during query processing. Xu et al. [42] prove that the Hilbert curve is a preferable SFC in that respect. Nishimura et al. [24] propose a cohesion cost that evaluates how well SFCs cluster data. Recently, Liu et al. [46] propose a cost model that evaluates the query performance of SFCs; it uses the BMTree and speeds up reward computation.

Reinforcement Learning (RL) in Indexing. Our method of generating SFCs is based on RL techniques [54, 27]. There are several recent studies, e.g., [51, 55, 41], on applying RL to generate tree structures. Yang et al.
[51] construct the Qd-tree for partitioning data into blocks on storage with Proximal Policy Optimization (PPO) [34]. Gu et al. [41] utilize RL to construct the R-tree for answering spatial queries [17], and NeuroCuts [55] constructs a decision tree using an RL agent. These RL designs are not suitable for learning piecewise SFCs. Our design of RL models is based on MCTS and differs from the designs in these studies.
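To make the piecewise idea concrete, the following toy sketch (our own illustration, not the paper's code) shows a two-leaf "BMTree" over 4-bit coordinates: the root consumes the top bit of x, and each resulting subspace orders its points with a different bit-merging pattern (BMP). Because both leaf patterns share the root's first decision, keys from the two subspaces remain consistently ordered.

```python
def merge_bits(x: int, y: int, pattern: str) -> int:
    """Merge the bits of (x, y) into one key following a bit-merging
    pattern (BMP), consumed most-significant bit first.
    'XYXYXYXY' yields a Z-curve key; 'XXXXYYYY' yields a C-curve key."""
    xi = pattern.count('X') - 1   # next x bit to consume (MSB first)
    yi = pattern.count('Y') - 1
    key = 0
    for ch in pattern:
        if ch == 'X':
            key = (key << 1) | ((x >> xi) & 1); xi -= 1
        else:
            key = (key << 1) | ((y >> yi) & 1); yi -= 1
    return key

def piecewise_key(x: int, y: int) -> int:
    """Toy two-leaf BMTree over 4-bit coordinates: the root splits on the
    top bit of x; the left subspace then uses a Z-like BMP and the right
    subspace a C-like BMP. Both patterns start with 'X' (the root's
    shared decision), so left-subspace keys all precede right-subspace keys."""
    pattern = 'XYXYXYXY' if (x >> 3) & 1 == 0 else 'XXXXYYYY'
    return merge_bits(x, y, pattern)
```

The actual BMTree learns both the splits and the per-subspace BMPs (over many more levels) with MCTS, but the key computation at a leaf is of this bit-merging form.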
Space-filling curves (SFCs, for short) have been widely applied to index multi-dimensional data: the data is first mapped to one dimension, and then a one-dimensional indexing method, e.g., the B-tree, indexes the mapped data. Existing SFCs adopt a single mapping scheme for the whole data space. However, a single mapping scheme often does not perform well across the whole data space. In this paper, we propose a new type of SFC, called the piecewise SFC, that adopts different mapping schemes for different data subspaces. Specifically, we propose a data structure termed the Bit Merging Tree (BMTree) that can generate data subspaces and their SFCs simultaneously, and achieves desirable properties of the SFC over the whole data space. Furthermore, we develop a reinforcement learning-based solution to build the BMTree, aiming to achieve excellent query performance. To update the BMTree efficiently when the distributions of data and/or queries change, we develop a new mechanism that achieves fast detection of distribution shifts in the data and queries, and enables partial retraining of the BMTree. The retraining mechanism achieves performance enhancement efficiently since it avoids retraining the BMTree from scratch. Extensive experiments show the effectiveness and efficiency of the BMTree with the proposed learning-based methods.
# I. INTRODUCTION

Yangliuqing woodblock prints, esteemed as a significant facet of China's intangible cultural heritage (Qian, 2023), are renowned for their intricate textures, vibrant colors, and centuries-old craftsmanship (Liu, 2012). Originating during the Ming Dynasty (1368–1644) (Zhang, B., & Romainoor, N. H., 2023), these prints uniquely blend woodblock printing with hand-painting techniques (Dong Shu et al., 2024; Nag, D., 2024), reflecting traditional Chinese aesthetics and societal values (Wang, 2022). In today's rapidly modernizing society, Yangliuqing woodblock prints confront substantial challenges. Traditional production processes are intricate and time-consuming, involving meticulous handcrafting that is increasingly rare (Wang, X., & Aoki, N., 2017; Aoki, N., 2021). This complexity hinders the prints' appeal to contemporary, digitally oriented audiences who favor more accessible art forms (Chen, C., & Wang, H., 2024; Ai, Q. et al., 2024; Tsatsanashvili, A., 2024). Consequently, balancing the preservation of authenticity with the need for creative innovation has become imperative to revitalize these prints, ensuring they remain culturally relevant and economically viable (Pawar, H., 2025; Sullivan, A. M., 2015).

To address the challenges of preserving traditional art forms while introducing modern creativity, our study explores a hybrid digital methodology that integrates advanced artificial intelligence tools with traditional art references (Wang, T., & Wu, D., 2024; Gîrbacia, F., 2024). Specifically, we utilize two technological platforms: DeepSeek (Neha, F., & Bhati, D., 2025), a large language model (LLM), and MidJourney (Neha et al., 2025), an image generation platform from an independent research lab dedicated to expanding human imaginative capabilities. This combination aims to generate new image portfolios that honor historical aesthetics while incorporating fresh creative elements (Chiou et al., 2023).
DeepSeek is renowned for its advanced prompt generation capabilities, enabling the extraction of key thematic elements from traditional Yangliuqing prints (Wu et al., 2024). Its ability to produce detailed, culturally informed prompts makes it an invaluable tool for bridging the gap between historical visual language and contemporary design needs (AlAfnan, M. A., 2025). Recent industry reports highlight DeepSeek's innovative approach, significantly contributing to breakthroughs in AI-driven image synthesis (Islam et al., 2025). Complementing DeepSeek, MidJourney functions as our primary image generation engine. Renowned for producing visually striking and detailed outputs, MidJourney translates textual prompts into high-quality images that capture both the traditional charm and the modern aesthetics required for renewed cultural expression (Anantrasirichai et al., 2025). Its proven track record in digital art creation makes it particularly suitable for synthesizing the complex visual textures of woodblock prints, ensuring that the end results are consistent with historical norms and resonate with contemporary artistic sensibilities (Bakumenko, S., 2024). By integrating DeepSeek's prompt generation (Xian, L., 2025) with MidJourney's image synthesis, this methodology offers a novel approach to revitalizing traditional art forms, balancing preservation with innovation (Morgado, L., 2025).

Yangliuqing woodblock prints often depict famous stories, such as the Romance of the Three Kingdoms or the love story of Niulang and Zhinu (Qian, J., 2023). Themes play a crucial role in these prints, and in the context of the 21st century (Pearce, N., 2021), the fight against COVID-19 has become an important narrative (Devera et al., 2024). The victorious battle against the pandemic is a source of collective joy, and we have selected this theme as the central focus of this study (Tao et al., 2024).
Recognizing that no single approach can fully encapsulate the multifaceted nature of Yangliuqing prints, our research explores four methods for generating image portfolios. These methods were designed to test various combinations of AI-generated prompts and traditional reference imagery:

Portfolio 1: This approach utilizes DeepSeek-generated key prompts (focused on fighting COVID-19 and portraying happy winners, mixed with the Yangliuqing woodblock print style) paired exclusively with MidJourney-generated prints.

Portfolio 2: By incorporating original Yangliuqing prints as reference images alongside the DeepSeek-generated key prompts, this method enhances consistency.

Portfolio 3: Building on the previous techniques, this method combines DeepSeek-generated theme prompts (focused on fighting COVID-19 and portraying happy winners) with MidJourney-generated theme images, while also including original prints as references.

Portfolio 4: This method integrates DeepSeek-generated theme prompts, MidJourney-generated theme images, original Yangliuqing prints, and DeepSeek-generated key prompts to create a comprehensive approach.

By fusing DeepSeek's culturally rich prompt generation with MidJourney's powerful image synthesis (Ai, Q., 2024), we can create a new paradigm in which traditional art forms can be reimagined for the modern era. This hybrid strategy not only enhances the visual quality and thematic authenticity of the generated images but also offers a scalable model for promoting cultural heritage in various digital and commercial applications. In addition to technological innovation, our work contributes to broader discussions on cultural preservation and creative industry development. As modern consumers increasingly seek products that connect them with their cultural roots in fresh and engaging ways, our findings offer valuable insights for cultural and creative product developers.
The successful integration of these digital techniques can inspire similar initiatives across other domains of intangible cultural heritage, ensuring that traditional crafts not only survive but thrive in today's dynamic digital landscape.

# II. STUDY FRAMEWORK AND METHODOLOGY

FIGURE 1. Framework of the Study.

Figure 1 illustrates the study's workflow framework. In this study, we explored four methodologies (detailed in the Methodology section) to generate Yangliuqing-style New Year woodblock prints using advanced artificial intelligence tools, specifically DeepSeek-R1 and MidJourney (detailed in the overview of DeepSeek-R1 and MidJourney below). Each method leveraged these AI technologies to create art that reflects the traditional aesthetics of Yangliuqing woodblock printing.

# A. Overview of DeepSeek-R1 and MidJourney

DeepSeek-R1 (DeepSeek-AI et al., 2025): DeepSeek is a Chinese AI company that has made significant advancements in artificial intelligence. Its flagship model, DeepSeek-R1, has demonstrated remarkable reasoning capabilities through reinforcement learning, achieving performance comparable to leading AI models such as OpenAI's GPT series. Notably, DeepSeek-R1 was developed with a focus on reasoning without extensive supervised fine-tuning, emphasizing the model's ability to learn and adapt through reinforcement learning techniques.

MidJourney: MidJourney is an AI-driven image generation platform that transforms textual descriptions into visual art. Accessible via Discord, users input prompts to generate images, with the platform offering various parameters to fine-tune the output, such as aspect ratio, quality, and style. MidJourney has been instrumental in enabling users to create diverse and imaginative images based on their textual descriptions.

TABLE I. The parameters for DeepSeek-R1 and MidJourney.

# B. Original Traditional Yangliuqing Prints

We collected traditional Yangliuqing prints from (Xu et al., 2023)
as reference images; they not only preserve the original artistic features, such as intricate line work, vibrant colors, and symbolic compositions, but also provide essential visual references to guide our AI-generated artwork. These prints serve as a benchmark for maintaining the authenticity and stylistic integrity of Yangliuqing woodblock printing in our study. An example is shown below:

FIGURE 2. Example of Yangliuqing prints.

# C. Methodology

# 1) DIRECT PROMPT-BASED GENERATION USING MIDJOURNEY (PORTFOLIO 1)

In this approach, thematic keywords are input into DeepSeek, which then generates prompts for MidJourney based on these keywords and a predefined prompt template, as detailed in Table II.

TABLE II. Prompt Engineering Steps for Key Prompt Generation.

MidJourney then interprets this key prompt to generate corresponding images. To ensure the generated images align with the desired aesthetic, parameters such as aspect ratio, chaos level, and quality are adjusted accordingly (details shown in Table I).

# 2) COMBINED PROMPT AND REFERENCE IMAGE APPROACH (PORTFOLIO 2)

First, the key prompt was generated by DeepSeek (Table II). Simultaneously, a collection of original Yangliuqing New Year woodblock prints exemplifying the desired themes and styles was curated. These reference images were then input into MidJourney along with the prompts. MidJourney's AI model utilized both the textual key prompts and the visual references from the reference images to generate new prints.

# 3) THEMATIC IMAGE GENERATION FOLLOWED BY STYLE TRANSFER (PORTFOLIO 3)

In this approach, thematic keywords are input into DeepSeek to generate themed prompts, as detailed below:

Prompt Engineering Steps for Theme Prompt Generation

After the thematic prompt was generated, MidJourney produced theme-based paintings.

FIGURE 3. Example of Theme Images.

These initial AI-generated artworks captured the essence of the specified themes.
Next, a selection of these generated paintings was combined with reference images of traditional Yangliuqing prints and re-input into MidJourney. This process allowed the AI to blend the stylistic elements of the original woodblock prints with the newly generated themes, resulting in refined prints that maintain the authenticity and artistic characteristics of Yangliuqing New Year woodblock printing.

# 4) COMBINED THEMATIC AND STYLE TRANSFER APPROACH (PORTFOLIO 4)

The thematic paintings, which combine the generated theme-based artworks with reference images of traditional Yangliuqing prints, are input into MidJourney along with the key prompt (Table II). This process allows MidJourney to generate corresponding images that capture the desired thematic essence while adhering to the style of the reference prints. This method effectively integrates thematic image generation with style transfer, blending both textual prompts and reference images to produce artworks that reflect both the specified themes and the traditional aesthetics of Yangliuqing woodblock printing.

# 5) EVALUATION

Two complementary approaches were used for the evaluation of the generated images. First, we used the Fréchet inception distance (FID) (Yu et al., 2021) score to quantify the similarity between the generated images and the reference Yangliuqing prints. The FID score measures the distance between the feature distributions of the reference images and the AI-generated portfolios (1 to 4), with lower FID scores indicating a higher degree of similarity.
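The FID defined formally in the equation that follows can be computed directly from the sample means and covariances of the two feature sets. A minimal NumPy sketch (assuming the Inception features have already been extracted; the trace of the matrix square root is obtained from the eigenvalues of the product of the covariances, which are real and non-negative):

```python
import numpy as np

def fid_score(feats1: np.ndarray, feats2: np.ndarray) -> float:
    """Fréchet inception distance between two (n_samples, n_features) arrays."""
    mu1, mu2 = feats1.mean(axis=0), feats2.mean(axis=0)
    sigma1 = np.cov(feats1, rowvar=False)
    sigma2 = np.cov(feats2, rowvar=False)
    # Tr((S1 S2)^(1/2)) = sum of square roots of the eigenvalues of S1 S2,
    # since S1 S2 is similar to the PSD matrix S1^(1/2) S2 S1^(1/2).
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * tr_sqrt)
```

Identical feature sets give a score near zero; the portfolio-level means and standard deviations reported below are then computed over repeated such comparisons.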
$$
FID = \| \mu_1 - \mu_2 \|^2 + \mathrm{Tr}\left( \Sigma_1 + \Sigma_2 - 2 (\Sigma_1 \Sigma_2)^{\frac{1}{2}} \right),
$$

where $\mu_1, \mu_2$ are the means of the reference images and the AI-generated portfolios, $\Sigma_1, \Sigma_2$ are their covariances, and $\mathrm{Tr}$ denotes the trace of a matrix. Next, we calculated the FID scores for all AI-generated portfolios in comparison with the reference images. Finally, we obtained their mean $(\mu)$ and standard deviation $(\sigma)$ for the evaluation. In addition to the computational evaluation, we conducted a user-based questionnaire (details shown in Table IV) to further validate the authenticity and artistic quality of the generated images. We showed participants a set of reference Yangliuqing prints alongside the generated portfolios (Portfolios 1 to 4) and asked them to rate these portfolios in terms of cultural heritage, innovation, composition, and purchasing appeal. By combining objective quantitative assessment (details shown in Table IV) with subjective human judgment, this dual approach ensured a more comprehensive assessment of the extent to which the AI-generated prints retained the essence of traditional Yangliuqing woodblock prints.

# Comparison of FID Scores

FIGURE 4. FID Score Distribution, with Mean (μ) and Standard Deviation (σ), Between AI-Generated Portfolios and Reference Images.

Figure 4 illustrates the FID score distribution between the AI-generated portfolios and the reference images. Portfolio 1 exhibits the highest mean FID score (μ = 245.2) and the largest standard deviation (σ = 15.3), indicating that it struggles with consistency in generating outputs that align with authentic Yangliuqing features.
The substantial variability, with FID scores ranging from 220.18 to 267.44 (a range of 47.26), underscores significant deviations across generated images, suggesting that the outputs are erratic and unreliable. The high mean FID score implies that the generated images fail to accurately replicate the intricate details and aesthetic qualities of the traditional Yangliuqing style, revealing significant shortcomings in the model's understanding and execution of traditional artistic motifs. This highlights the importance of using original Yangliuqing prints as references. Portfolio 2 demonstrates a more balanced performance, with a mean FID score of 161.5 and a relatively low standard deviation (σ = 5.1). The FID scores are tightly clustered between 153.88 and 168.91, indicating stable performance compared to Portfolio 1. The narrow range (15.03) and low variability suggest that the model consistently generates outputs but tends to adhere closely to a standard pattern, avoiding significant variation in artistic elements. This may be due to the lack of referenced theme images, which could have introduced more creativity and diversity in the generated outputs. Portfolio 3 falls near the borderline of acceptability with a mean FID score of 194.4 and a moderate standard deviation (σ = 5.8). The variability in this portfolio is higher than in Portfolio 2, and its FID scores show a bimodal distribution, with peaks around 185–190 and 198–202. This suggests that while many outputs are relatively close to acceptable standards (below 200), there are occasional failures where the FID score exceeds 200, signaling the presence of significant artifacts or inaccuracies. This may be due to the absence of key prompts as references, which could have helped the model produce more consistent results. Portfolio 4 stands out as the highest-performing model, with the lowest mean FID score (μ = 150.2) and minimal variability (σ = 4.9). The FID scores range narrowly between 141.88 and 157.78, with a small range of 15.9. The consistency in the results is remarkable, with all FID scores falling well below 160, and the best result (141.88) approaching state-of-the-art performance. Based on the FID analysis, Portfolio 4 emerges as the top performer, with the lowest FID score, minimal variability, and the strongest ability to preserve the traditional features of Yangliuqing woodcut art. While Portfolio 2 offers stable outputs, attributed to the use of original Yangliuqing prints and key prompts, its creative potential may be somewhat limited. In contrast, Portfolios 1 and 3 face challenges in consistency and authenticity due to the lack of key prompts or reference theme images. Therefore, the MidJourney-generated Portfolio 4, incorporating referenced key prompts, theme images, and original Yangliuqing prints, should be considered the best-performing model, offering the most reliable and authentic results.

# Questionnaire

# Participants Analysis

FIGURE 5. Questionnaire Analysis by Age, Academic Background, and Level of Understanding.

Based on Figure 5, a total of 62 responses were collected from the questionnaire, of which the highest proportion (about 58.6%) was from the general audience aged 18–24. Overall awareness of Yangliuqing woodblock prints was relatively low: about 59.7% of the respondents indicated that they had no knowledge of them at all, and only one person (1.6%) was very familiar with them. Among the participants, 16.1% were art professionals and 24.2% were art enthusiasts; their recognition was relatively high, but their responses were still mainly "somewhat aware".
Generally speaking, the popularity of Yangliuqing woodblock prints is relatively low among young people, especially those with non-artistic backgrounds, so future promotion should focus on the general audience and strengthen cultural dissemination and education.

# Portfolio Performance Analysis

FIGURE 6. Portfolio Performance Distribution by Theme, Traditional Style, Innovation, and Total Quality.

FIGURE 7. Comparison of Portfolios Across Categories.

After analyzing the data collected from the theme-related question (Figures 6 and 7), Portfolio 1, which utilized DeepSeek-generated key prompts and MidJourney-generated prints, achieved a moderate average score of 6.2. While some prints captured the essence of Yangliuqing themes, the high variability in scores (ranging from 1 to 10) indicates inconsistency. Additionally, the generated prints focused on thematic style rather than the original Yangliuqing artistic style (details in Figure 8). Portfolio 2, which incorporated original Yangliuqing prints as references, showed a slight improvement with an average score of 6.8. The generated images successfully integrated both the thematic focus and the authentic Yangliuqing style (details in Figure 9). However, score variability persisted, suggesting that references alone were insufficient to fully align the prints with Yangliuqing's artistic themes. Portfolio 3, which combined DeepSeek-generated theme prompts and MidJourney-generated theme images alongside original references, underperformed with an average score of 5.9. While the generated images largely adhered to the original Yangliuqing style, they failed to incorporate the intended thematic elements, particularly the absence of any COVID-19-related features, such as masks (details in Figure 10). This suggests that relying solely on images without additional prompts may have resulted in misalignment with the intended themes.
Portfolio 4, which integrated DeepSeek-generated theme prompts, MidJourney-generated theme images, original references, and DeepSeek-generated key prompts, achieved the highest average score of 8.1. This approach successfully blended both thematic elements and the original Yangliuqing style (details in Figure 11). These results indicate that a multi-layered approach is the most effective for capturing the artistic themes of Yangliuqing.

FIGURE 8. Example of Portfolio 1.

FIGURE 9. Example of Portfolio 2.

In the analysis of traditional style, Portfolio 1 demonstrated moderate adherence, with an average score of 6.5. However, the presence of low scores (e.g., 1 or 2) indicates significant deviations in certain prints. Additionally, the generated images lacked any original Yangliuqing style (Figure 8). Portfolio 2 improved with an average score of 7.1, as the inclusion of original references helped ground the prints in traditional aesthetics. Portfolio 3, despite adding theme prompts and theme images, achieved a similar average score of 6.8, suggesting that these components did not significantly enhance traditional adherence. Portfolio 4 outperformed the others with an average score of 8.3, demonstrating that the combination of theme prompts, theme images, original references, and key prompts is most effective for maintaining the traditional Yangliuqing style.

FIGURE 10. Example of Portfolio 3.

FIGURE 11. Example of Portfolio 4.

When evaluating innovation, Portfolio 1 achieved a moderate average score of 6.0, but the variability in scores (ranging from 2 to 9) suggests a mix of conventional and experimental prints, with some either too traditional or too avant-garde. Portfolio 2 showed a slight improvement, with an average score of 6.4 and a range from 3 to 9, as the original references allowed for a more balanced creative foundation.
Portfolio 3 underperformed, with an average score of 5.7 (range 2 to 8), likely due to the complexity of the method or misalignment between the components. Portfolio 4 excelled, achieving an average score of 8.7, with a range of 7 to 10, demonstrating that the combination of theme prompts, images, and references facilitated greater creative freedom while still respecting traditional elements. In terms of overall quality, considering factors like line quality, color scheme, and detail, Portfolio 1 showed moderate quality, with an average score of 6.3, though the range from 2 to 9 indicates some inconsistencies. Portfolio 2 improved with an average score of 6.7 (range 4 to 9), benefiting from the grounding effect of original references. Portfolio 3, with an average score of 5.9 (range 3 to 8), underperformed, likely due to the method's complexity. Portfolio 4 again achieved the highest score of 7.8 (range 6 to 9), demonstrating that the comprehensive approach combining all elements resulted in the most polished, cohesive, and high-quality prints.

# Portfolios' Value Analysis

FIGURE 12. Visualization of Portfolios' Value Analysis for Buying Willingness and Cultural Connotation.

The data (details shown in Figure 12) reveal distinct patterns in participants' willingness to buy and their perception of how well each portfolio promotes traditional Yangliuqing culture. Portfolio 4 stands out as the most appealing, with the highest counts in the "more willing" (16) and "very willing" (4) categories for buying willingness, as well as the highest count in the "perfect to pass on" (9) category for connotation. This suggests that its combination of DeepSeek-generated theme prompts, original Yangliuqing prints, and MidJourney-generated prints effectively bridges modern creativity with traditional cultural elements, making it both commercially attractive and culturally resonant.
In contrast, Portfolios 1 and 2 perform moderately, with most participants rating them as "average" in both buying willingness and connotation. While they maintain a neutral appeal, they lack the strong cultural or emotional connection needed to inspire higher buying interest or deeper cultural appreciation. Portfolio 3, despite its theme-based approach, shows polarization, with some participants finding it appealing while others are strongly disinterested, indicating that its thematic execution may not consistently align with cultural expectations or consumer preferences. In terms of promoting traditional culture, Portfolio 4 again excels, as its higher counts in the "perfect to pass on" and "more willing" categories suggest it successfully communicates the value of Yangliuqing traditions in a way that resonates with participants. This portfolio demonstrates how blending original prints with AI-generated elements can create a compelling narrative that honors tradition while embracing innovation. On the other hand, Portfolios 1 and 2 fall short in this regard, as their high counts in the "better to pass on" category indicate a lack of strong cultural or emotional engagement. Portfolio 3, while attempting to integrate themes and original prints, fails to evoke a consistent sense of cultural promotion, as evidenced by its lack of responses in the "harder to pass on" category. Overall, the data underscore the importance of balancing modern creative techniques with authentic traditional elements to effectively promote cultural heritage and drive consumer interest.

# IV. Discussion

Yangliuqing woodblock prints are part of China's intangible cultural heritage, but innovation is currently hindered by the challenge of preserving their intricate textures, colors, and traditional elements while incorporating creative variation. To address these challenges, we generated portfolios using four methods.
The first method involved utilizing DeepSeek-generated key prompts combined with MidJourney-generated prints. The second method incorporated DeepSeek-generated key prompts alongside original Yangliuqing prints as references, with MidJourney-generated prints. The third method built on DeepSeek-generated theme prompts and MidJourney-generated theme images, adding mixed original Yangliuqing prints as references and MidJourney-generated prints. Finally, the fourth method combined a mix of DeepSeek-generated theme prompts, MidJourney-generated theme images, original Yangliuqing prints, DeepSeek-generated key prompts, and MidJourney-generated prints. We then evaluated the four portfolios using Fréchet Inception Distance (FID) scores and participant feedback to gauge the effectiveness of the various methods for replicating and promoting the traditional art form. Upon analyzing the FID scores, we found that Portfolio 1, with a mean FID score of 245.2 and a high standard deviation of 15.3, demonstrated substantial variability (220.18–267.44), reflecting poor consistency and difficulties in capturing the delicate details of Yangliuqing art. The lack of key reference images likely contributed to these inconsistencies, hindering the model's ability to accurately replicate the traditional style. Portfolio 2, which had a mean FID score of 161.5 and low variability (σ = 5.1), showed more consistent outputs with scores clustered between 153.88 and 168.91. However, despite its stability, this portfolio lacked the depth and creative variation needed to fully capture the richness of traditional Yangliuqing prints. The absence of specific reference theme images likely limited the model's ability to innovate within the traditional framework. Portfolio 3, with a mean FID score of 194.4 and moderate variability (σ = 5.8), exhibited mixed results.
While some outputs were acceptable, others showed significant artifacts (scores above 200). This portfolio might benefit from the inclusion of referenced theme images to improve both consistency and creative output. Portfolio 4, emerging as the best performer, had the lowest mean FID score (150.2) and minimal variability (σ = 4.9). Its scores ranged narrowly (141.88–157.78), reflecting exceptional consistency and the best preservation of traditional Yangliuqing features. Further analysis of data from 62 participants, gathered through a questionnaire, revealed that Portfolio 4 consistently outperformed the other portfolios across all categories: theme, traditional style, innovation, and overall quality in terms of lines, color scheme, and details. This highlights the effectiveness of a hybrid approach combining DeepSeek-generated theme prompts, MidJourney-generated theme images, original Yangliuqing prints, and DeepSeek-generated key prompts. This method not only captured the essence of Yangliuqing themes but also preserved traditional aesthetics, fostered innovation, and achieved high overall quality. The inclusion of original references was crucial in grounding the prints in traditional style, while the addition of theme prompts and images enhanced thematic relevance and creativity. In contrast, the underperformance of Portfolio 3, which lacked any theme elements, suggests that without the integration of key prompts, misalignment and inconsistent results can occur. Therefore, key prompts are essential for achieving success. Regarding participants' willingness to purchase and promote traditional culture, Portfolio 4 emerged as the most effective in driving consumer interest. It had the highest number of participants who were "more willing" (16), "very willing" (4), and deemed it "perfect to pass on" (9).
The combination of DeepSeek-generated prompts, original prints, and MidJourney-generated images resonated strongly with participants, bridging modern creativity with traditional elements. In contrast, Portfolios 1 and 2 were perceived as average, lacking strong cultural or emotional engagement, while Portfolio 3 displayed polarization, indicating inconsistent appeal. This underscores the importance of integrating authentic traditional elements with innovative approaches to effectively promote cultural heritage and attract consumer interest.
Yangliuqing woodblock prints, a cornerstone of China's intangible cultural heritage, are celebrated for their intricate designs and vibrant colors. However, preserving these traditional art forms while fostering innovation presents significant challenges. This study explores the DeepSeek + MidJourney approach to generating creative, themed Yangliuqing woodblock prints focused on the fight against COVID-19 and depicting joyous winners. Using Fréchet Inception Distance (FID) scores for evaluation, the method that combined DeepSeek-generated thematic prompts, MidJourney-generated thematic images, original Yangliuqing prints, and DeepSeek-generated key prompts in MidJourney-generated outputs achieved the lowest mean FID score (150.2) with minimal variability (σ = 4.9). Additionally, feedback from 62 participants, collected via questionnaires, confirmed that this hybrid approach produced the most representative results. Moreover, the questionnaire data revealed that participants demonstrated the highest willingness to promote traditional culture and the strongest interest in consuming the AI-generated images produced through this method. These findings underscore the effectiveness of an innovative approach that seamlessly blends traditional artistic elements with modern AI-driven creativity, ensuring both cultural preservation and contemporary relevance.
[ "cs.GR", "cs.CL", "cs.CY" ]
# I. INTRODUCTION

The rapid evolution of quantum software frameworks [16] presents a unique challenge for developers maintaining code across versions [10]. This issue is particularly evident in Qiskit, one of the most widely adopted platforms for quantum programming. The recent release of version 2.0 introduced substantial changes that affect backward compatibility. Major releases, typically published on an annual basis, often include significant modifications ranging from API deprecations to more fundamental architectural updates. As a result, developers must invest considerable effort in understanding and adapting their existing code bases to align with the latest version. Migrating code between Qiskit versions is not only time-consuming but also error-prone [2], [7], [9], [15], especially for developers who are not deeply familiar with the internal evolution of the framework. Developers often find their programs broken after library updates [8], [11], [14], leaving them reading documentation and release notes to understand a new set of problems in once-functional code [17], [21]. This raises a key question: "Can large language models (LLMs) assist in the process of Qiskit code migration by leveraging structured knowledge about breaking changes?" On the other hand, as the Qiskit ecosystem evolves, new versions often introduce not only breaking changes but also enhancements aimed at improving the performance, usability, and modularity of quantum algorithms. These changes, while beneficial, may not be immediately adopted by practitioners maintaining legacy code or developing new applications based on outdated paradigms. Consequently, beyond simply ensuring compatibility, there is a growing need to leverage these improvements to optimize existing quantum software.
This raises a compelling question in the context of Quantum Software Engineering (QSE): can large language models (LLMs), when equipped with domain-specific migration knowledge, go beyond syntactic adaptation and suggest substantive improvements that reflect best practices and leverage newly introduced functionalities? In this work, we explore a novel methodology for refactoring Qiskit code using LLMs guided by a domain-specific taxonomy of migration scenarios. The taxonomy, developed in Suárez et al. [19], was created from Qiskit's official documentation through both manual and LLM-assisted processes. For the version in our experiment, it consists of approximately 43 representative cases, covering categories such as deprecations, new features, module restructuring, renamed classes, parameter changes, and architectural shifts. Compared to raw documentation, the taxonomy offers a condensed and structured representation of migration knowledge that is more readily digestible by both humans and LLMs. Our method involves providing an LLM with the migration taxonomy, a synthetic Python source file written against a known version of the Qiskit library and containing a known set of migration scenarios, and a prompt asking the model to identify instances of the migration scenarios within the code. This step allows us to evaluate the model's ability to identify the migration scenarios present in the taxonomy. In a later stage, the LLM is also asked to propose migrated versions of the identified segments. To assess the feasibility and accuracy of this approach, we conducted two sets of experiments: the first focused on identifying scenario instances in real-world and synthetic code examples; the second evaluated the quality of the migration suggestions on synthetic inputs. In both cases, the results were manually reviewed for correctness and relevance, and the model was required to explicitly reference the applicable migration scenario from the taxonomy.
While the use of LLMs for code transformation is becoming increasingly common [3], [7], [23], our work distinguishes itself by anchoring the model's reasoning in the structured, domain-specific migration scenarios described in the taxonomy. This enables more targeted analysis than general-purpose prompting alone. Furthermore, thanks to recent advances in LLM context length, we are able to input both the taxonomy and source files in a single prompt, eliminating the need for advanced techniques such as RAG or elaborate chunking strategies. The contributions of this work are twofold: (1) we present a novel methodology for combining domain-specific migration knowledge with LLM capabilities to refactor Qiskit code, and (2) we provide experimental evidence demonstrating that LLMs can effectively identify and resolve migration issues in a structured and interpretable way when provided with structured, domain-specific knowledge. The remainder of this paper is organized as follows: Section II reviews related work on software migration and LLM-based code transformation. Section III describes the Qiskit migration taxonomy and experimental setup. Section IV presents our methodology. Section V presents the results and insights from our evaluation. Finally, Section VI concludes with a discussion of the results and directions for future work.

# II. RELATED WORK

The increasing complexity of quantum software, coupled with the rapid evolution of development frameworks like Qiskit, has exposed serious challenges in code maintenance, especially for tasks such as API migration and adaptation across versions. Large Language Models (LLMs) are emerging as versatile tools for assisting with these tasks, offering potential solutions in program explanation, generation, and transformation.
While prior studies have explored individual capabilities of LLMs, few have directly examined their utility in coping with evolving quantum SDKs, a critical bottleneck in the scalability and longevity of quantum software. Here we discuss the contributions that are most pertinent to and aligned with the objectives of this study. Frequent API changes in libraries like Qiskit introduce semantic drift, deprecated constructs, and updated usage patterns that break existing code. Yet, tool support for automatic migration is virtually nonexistent. Almeida et al. [1] investigated the use of GPT-4 for library migration tasks in Python, showing that carefully designed prompts (e.g., one-shot and chain-of-thought) significantly improve correctness during SQLAlchemy 1 version upgrades. Although their domain was classical, the methodology is highly relevant to quantum SDK migration, where breaking changes often go undocumented or under-specified. In contrast, our work focuses specifically on quantum programs and exposes how these prompt strategies perform under domain-specific constraints, such as quantum software engineering semantics. A few recent efforts have begun to explore how LLM strategies can support quantum code migration. Asif et al. [2], for example, present PennyLang, which uses LLMs to translate quantum programs from PennyLane to Qiskit, focusing on inter-framework migration. Their approach combines prompt engineering with retrieval techniques to align equivalent quantum operations across SDKs. In contrast, our work addresses the challenges of intra-framework migration within Qiskit itself, where semantic changes and API evolution introduce subtler, version-specific obstacles that require deep contextual understanding. One promising direction for improving quantum code migration is Retrieval-Augmented Generation (RAG), introduced by Lewis et al. [13].
RAG augments language models with a retrieval component that dynamically fetches relevant information—such as documentation or code examples—from an external corpus during inference. This architecture allows the model to generate more accurate and up-to-date responses, especially in domains where knowledge evolves rapidly, without requiring model retraining. In contrast to approaches like RAG, our study operates in a static, zero-shot setting. We evaluate LLMs based solely on their pre-trained parametric memory, without any retrieval or fine-tuning to expose how well they handle quantum code migration tasks when relying solely on internalized knowledge. Refactoring-oriented studies, such as that by Cordeiro et al. [4], show that LLMs tend to outperform human developers when dealing with systematic code smells, yet often fall short in more context-dependent scenarios. This limitation is particularly relevant to code migration, where understanding nuanced and version-specific changes is critical. Our work extends this perspective by focusing on the correctness and clarity of explanations in version-sensitive quantum code, highlighting how LLMs can misinterpret logic due to undocumented or implicit semantic shifts across different Qiskit versions. Zhao’s catalog of quantum-specific refactorings [24] emphasizes the need for automated transformation tools that respect quantum constraints like entanglement and measurement, which are often altered across Qiskit versions. Our study brings an orthogonal contribution by highlighting how LLMs struggle to explain these quantum-specific patterns when they appear in unfamiliar or legacy code structures, thereby underlining a prerequisite for safe refactoring. Although migration requires transformation, it relies on a deep understanding of both the legacy and target code. D’Aloisio et al. [5] explored LLMs’ ability to explain quantum algorithms in OpenQASM, demonstrating consistent explanation quality under constrained contexts. 
Suárez et al. [19] extend their work by including newer LLMs (Qwen, DeepSeek, GPT-4), broader datasets, and a more comprehensive evaluation protocol that includes Qiskit-specific constructs and prompt variations across versions. We follow up on this work, using the generated taxonomies to enrich the LLM context with domain-specific information to identify migration scenarios between Qiskit versions. Dupuis et al. [7] fine-tuned language models on Qiskit code to optimize quantum-specific code generation, achieving state-of-the-art results on the Qiskit HumanEval benchmark. In contrast to their focus on model specialization for generative tasks, our work targets support for manual refactoring by comparing our model's performance, using a migration case taxonomy, with that of general-purpose models without domain-specific fine-tuning, to propose useful transformations in response to Qiskit's syntactic and semantic evolution. At the intersection of quantum computing and migration, we also find the work by Zhao [24], which addresses the challenges of refactoring quantum programs written in Q#. Although primarily centered on maintainability and efficiency rather than migration across versions, it does propose a useful catalog of refactorings based on algorithmic patterns, conceptually similar to our taxonomy. Furthermore, much like their proposed QSharp Refactoring Tool, our work moves toward building a hybrid tool for partially automated quantum code migration. However, our work focuses on Qiskit, which is a library for the Python programming language, and there is evidence suggesting that LLMs may perform differently by language [22]. The work by Sahoo et al. [18] provides a comprehensive review of prompt engineering and the most relevant associated works, considering query techniques, contextual refinement, and precision on the associated metrics, as well as prompt iteration and rephrasing. Recent prompting innovations such as Rephrase-and-Respond (RaR) by Deng et al.
[6] and Chain-of-Thought prompting [12] have shown measurable benefits in task performance across ambiguous or complex queries. We test these prompting strategies under the added stressor of Qiskit version drift and demonstrate that their effectiveness degrades in the absence of domain adaptation, highlighting new boundary conditions for prompt engineering in quantum code tasks. We also consider recent work related to Qiskit migration tasks [25]. While it shares some overlap with our focus, particularly in emphasizing the manual difficulty, time cost, and error-prone nature of migrating quantum code, it takes a different approach by leveraging an indexing system (Kythe) to manage code references. Although it also classifies migration scenarios and uses the Gemini model, its evaluation does not take place within a complex quantum computing context. Instead, it relies heavily on human intervention for edge cases and final validations. Furthermore, the effectiveness of its technique is tightly coupled to the model's context window, whereas our approach is independent of such limitations, as the base taxonomy is precomputed and model-agnostic. While prior work has established the potential of LLMs in code explanation, refactoring, and generation, there remains a substantial gap in their application to API migration in quantum software, especially within fast-evolving ecosystems like Qiskit. Our study addresses this need by benchmarking LLMs on their ability to explain and adapt code in the face of version drift. By combining model comparisons, explanation quality evaluation, and prompt variation experiments, we offer practical insights into LLM readiness for version-aware quantum software support.

# III. TAXONOMY OF QISKIT MIGRATION SCENARIOS

To effectively guide large language models (LLMs) in the task of Qiskit code migration, we rely on a structured taxonomy of migration scenarios. This taxonomy captures the changes introduced between Qiskit versions.
These taxonomies need to be created per version release; for example, the taxonomy for version 0.46 is based on the change logs, documentation, and release notes for that Qiskit version. The taxonomy was specifically designed to bridge the gap between unstructured release notes and actionable migration knowledge for automated refactoring systems. The process involved in the creation of these taxonomies is described in detail in Suárez et al. [19]. In a nutshell, the taxonomies can be created manually or using LLM-assisted technologies. In both cases, the origin of the migration scenarios is the official Qiskit documentation and release notes. The architecture of the code repository is described in Figure 1.

Fig. 1. Architecture of the code repository used for the experiments. There are user and system prompts for both experiments, with and without taxonomy. There were a total of 50 invocations to the model, two per each of the 25 code snippets.

While the work described above focuses on assessing the ability of LLMs to summarize the changes between Qiskit versions in a structured manner and automatically generate the taxonomy, the focus of our study is to evaluate the impact of explicitly providing such structured information to LLMs in order to assist quantum software practitioners in identifying and performing code migrations. To assess the influence of the taxonomy on the LLM-assisted migration process, we focused on synthetic yet representative migration cases targeting Qiskit version 0.46, constructed from known changes introduced in prior versions. Thus, we created a taxonomy targeting version 0.46 using ChatGPT 4.1. The results were refined and manually verified by experts in quantum computing programming.

TABLE I. EACH SCENARIO IS FRAMED BY THE LLM USING THESE DIMENSIONS ACCORDING TO THE PROVIDED PROMPT. THE DEGREE OF DIFFICULTY AND IMPACT ON SE/QSE, WHILE RETURNED BY THE TOOL, WERE NOT USED IN THE PRESENT WORK AS THEY REQUIRE FURTHER DISCUSSION.
The resulting taxonomy is a markdown file with the columns described in Table I. The generated taxonomy consists of 43 scenarios, of which 29 are deprecations, 6 describe new features, and 8 describe structural changes.

# IV. METHODOLOGY

To evaluate the impact of a structured taxonomy on the effectiveness of large language models (LLMs) in assisting Qiskit code migration, we designed a controlled experiment using synthetic yet representative code samples. A total of 25 Python code snippets were manually crafted to emulate realistic usage patterns while embedding specific migration scenarios relevant to Qiskit version 0.46. Each snippet consisted of between 9 and 30 lines of code and was designed to reflect real-world practices such as varied import styles, inline comments, and multi-line constructs. All snippets targeted deprecated features or modules, or were constructed to expose the model to known patterns from the taxonomy. Each source file contains one or two scenarios, but the number of lines involved in addressing a scenario may differ. For example, for the deprecation of the function execute() from the qiskit module, the source code contains both the import statement and the line in which the function is used; the scenario is the same, but multiple lines are affected. For each code sample, we conducted two independent interactions with the same LLM: one where the full migration taxonomy was included as context, and one where it was omitted. In both cases, the prompts asked the model to identify migration issues and provide a structured response detailing the affected lines, a brief description and categorization of the scenario, the artifact involved, and the suggested refactoring. When the taxonomy was included, the prompt additionally required the model to reference the corresponding scenario identifier, or to answer with “*” if no match was found, indicating that the model had identified a migration scenario not present in the taxonomy.
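To make the taxonomy format concrete, an entry might look as follows. This is an illustrative sketch: the column names follow the fields referenced in the text (Scenario ID, Type, Summary, Artifacts, Refactoring), the scenario identifier D-07 is hypothetical, and the row paraphrases the execute() deprecation discussed below.

```markdown
| Scenario ID | Type        | Summary                        | Artifacts      | Refactoring                               |
|-------------|-------------|--------------------------------|----------------|-------------------------------------------|
| D-07        | Deprecation | execute() deprecated in 0.46   | qiskit.execute | Use transpile() followed by backend.run() |
```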
Line numbers were pre-inserted in the code through an automated preprocessing step to improve referential clarity in the output. To standardize the model outputs and facilitate comparison, we instructed the LLM to return its migration suggestions in a structured markdown table format. The expected output varied slightly depending on whether the taxonomy was provided; note that when no taxonomy is provided, it is impossible to associate a scenario ID. In the case with taxonomy, the output was structured using six columns:

• Line — the code line number, added programmatically before the prompt.
• Code — the exact line of source code being analyzed.
• Scenario ID — the identifier of the migration scenario in the taxonomy, or an asterisk (*) if no match was found.
• Scenario — a synthesized description combining the taxonomy’s Type and Summary fields (e.g., Deprecation → function_name() deprecated). If the update is not mandatory for compatibility, the label (optional) was added.
• Artifact — the affected component, drawn directly from the taxonomy’s Artifacts field.
• Refactoring — the recommended code change for versions $\geq 0.46$, left blank if the model was unsure or no clear fix was applicable.

In the case without taxonomy, the output was structured using five columns:

• Line — the code line number.
• Code — the exact source code line.
• Scenario — a short description of the change and the affected artifact (e.g., Deprecation → function_name() method deprecated), including the (optional) label if applicable.
• Artifact — the module, method, or parameter involved in the migration.
• Refactoring — the proposed code change for compatibility with version $\geq 0.46$.

All experiments were conducted using OpenAI’s gpt-4-0613 model via the chat.completions API endpoint, with a temperature setting of 0.1 and default values for all other parameters.
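The preprocessing and prompt assembly can be sketched as follows. This is a minimal illustration: the helper names and prompt wording are our assumptions, not the verbatim prompts used in the experiments (the actual invocations used model gpt-4-0613 with temperature 0.1; the API call itself is omitted).

```python
def number_lines(source):
    """Prefix each code line with its 1-based number, as in the automated
    preprocessing step used to improve referential clarity."""
    return "\n".join(f"{i}: {line}"
                     for i, line in enumerate(source.splitlines(), start=1))

def build_messages(snippet, taxonomy_md=None):
    """Assemble chat messages for one invocation; taxonomy_md=None gives
    the 'without taxonomy' condition. Wording is illustrative only."""
    system = ("You are an assistant that identifies Qiskit migration issues "
              "for version >= 0.46, using your prior knowledge.")
    user = ("Analyze the following code and return a markdown table "
            "of migration suggestions.\n")
    if taxonomy_md is not None:
        user += ("Reference the Scenario ID from this taxonomy, "
                 "or '*' if no match is found:\n" + taxonomy_md + "\n")
    user += number_lines(snippet)
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]
```

The two experimental conditions then differ only in whether `taxonomy_md` is passed, so the code under analysis and the output format instructions stay identical across conditions.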
No chain-of-thought prompting, function calling, or external tools were used. Furthermore, the model was not allowed internet access during the experiments. Although the prompts explicitly stated that the model could rely on its prior knowledge (the phrase “using your prior knowledge” was included in the prompt), we recognize that the specific versions of Qiskit included in its training corpus are not transparent. As such, we assume the model had access to documentation up to Qiskit versions 0.46, 1.0, and possibly 2.0 at the time of training, which introduces an uncontrollable variable addressed later in the discussion. To assess the quality of the responses, two quantum computing experts manually evaluated each model suggestion. Each individual line-level recommendation was scored using a color-coded rubric: green for correct suggestions (OK); yellow for minor issues easily fixed on inspection (OK-); orange for valid Python but not aligned with version 0.46 or not adapted to the provided code (X+); and red for incorrect or misleading suggestions or a misidentified scenario (X). The review process was performed independently, without formal inter-rater reliability measures, though all scores were cross-checked for consistency. Also, to account for false negatives, the number of unreported migration changes was verified. We provide these numbers in the results section as a ratio of missed changes over the total expected changes. See Figure 2.

Value / Color / Description:
• X (Red) — The detected scenario is incorrect or inapplicable, or the suggested refactoring is incorrect.
• X+ (Orange) — The scenario or artifact detection is erroneous, or the suggested refactoring is inadequate or not applicable.
• OK- (Yellow) — Correct scenario detection, but the suggested refactoring needs adjustments; applying it directly does not solve the migration problem.
• OK (Green) — Correct scenario detection and an appropriate, functional suggested refactoring; it is relatively straightforward to apply and solves the migration problem.

This evaluation framework allowed us to measure whether prompting with a structured taxonomy leads to better migration outcomes than relying solely on the LLM’s pre-trained knowledge. Ultimately, our goal is to determine whether the overhead of building a structured taxonomy is justified by tangible improvements in LLM-assisted migration support.

# V. RESULTS

We evaluated the impact of providing a structured taxonomy on the performance of a large language model (LLM) when identifying and assisting with migration scenarios in Qiskit code. The evaluation focused on two key dimensions: scenario identification and the quality of refactoring suggestions.

# Scenario Identification

A total of 25 synthetic code samples were used in the experiments. Of these, 21 contained scenarios that required code refactoring due to incompatibility with Qiskit version 0.46. The remaining 4 were already compatible and served as negative test cases. Across the 21 samples that required refactoring, a total of 81 lines of code were expected to be modified.

• With the taxonomy, the model correctly identified 12 of the 21 refactoring scenarios.
• Without the taxonomy, 10 of the 21 scenarios were correctly identified.
• In both experiments, only 1 out of the 4 non-refactoring scenarios was correctly recognized as such (true negative).
• In the remaining 3 non-refactoring cases, the model produced incorrect migration suggestions (false positives).

These results indicate a moderate improvement in scenario identification when the taxonomy is included in the prompt.

# Refactoring Suggestions

We also assessed the quality of the refactoring suggestions provided by the model for the 81 lines of code that required changes.

• When using the taxonomy, the model correctly refactored 50 out of 81 lines (true positives).
• Without the taxonomy, the model produced correct suggestions for only 29 out of 81 lines.
• The taxonomy-assisted model produced 40 incorrect or misleading changes (false positives), compared to 61 incorrect suggestions without the taxonomy.
• The model failed to detect, or provided incorrect suggestions for, 31 lines that required refactoring when guided by the taxonomy, while it missed 52 such lines in the absence of the taxonomy (false negatives).

These results suggest that access to a structured taxonomy significantly improves both the precision and recall of the LLM in migration tasks. The taxonomy helps reduce the number of incorrect suggestions while also enabling the model to identify a higher number of correct refactorings. All scripts, associated data, and results are contained in our GitHub repository and are fully accessible without restriction [20]. The results described above are summarized in Table II.

# VI. DISCUSSION

Our work shows the role of the taxonomy as a condensed representation of migration scenarios. This structured format can support developers in adapting their code by enabling large language models (LLMs) to more effectively identify migration instances. Because the taxonomy is significantly more compact than the raw documentation from which it is derived, it allows a greater portion of the model’s context window to be allocated to actual code, without the need for techniques such as Retrieval-Augmented Generation (RAG). While comparing our approach to alternatives—such as including unprocessed documentation in the prompt or employing retrieval-based augmentation—is beyond the scope of this work, it is worth noting that the documentation associated with a single version can easily exceed the context length of contemporary LLMs.

TABLE II. SUMMARY OF SCENARIO IDENTIFICATION AND REFACTORING RESULTS WITH AND WITHOUT TAXONOMY SUPPORT.
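Treating the line-level counts as a confusion matrix (TP = correct suggestions, FP = incorrect or misleading changes, FN = required lines missed or handled incorrectly), the precision and recall gains can be made explicit. This is our own arithmetic on the reported numbers, not a table from the experiments:

```python
def precision_recall(tp, fp, fn):
    # Standard definitions: precision = TP/(TP+FP), recall = TP/(TP+FN).
    return tp / (tp + fp), tp / (tp + fn)

# With taxonomy: 50 correct, 40 incorrect, 31 missed (of 81 expected lines).
with_tax = precision_recall(tp=50, fp=40, fn=31)      # ≈ (0.56, 0.62)
# Without taxonomy: 29 correct, 61 incorrect, 52 missed.
without_tax = precision_recall(tp=29, fp=61, fn=52)   # ≈ (0.32, 0.36)
```

Both metrics roughly double under taxonomy guidance, consistent with the claim that the taxonomy improves precision and recall simultaneously rather than trading one for the other.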
Our work focuses only on migration to a single version of Qiskit from code compatible with the preceding minor releases (i.e., from 0.41–0.45 to 0.46). This approach aligns with the recommendation in the Qiskit documentation, which advocates incremental version migration to ensure greater reliability and maintainability. Migration across non-consecutive versions, particularly in the case of major version changes, introduces additional complexity and requires a more rigorous and individualized analysis. Another possible avenue of investigation involves prompting the model to directly produce a migrated version of the input code. Although our primary goal was to evaluate the effectiveness of the taxonomy in helping the model identify migration scenarios, applying the same evaluation framework to the refactored output code proved more complex and was therefore not adopted in this study. Nevertheless, our methodology could be adapted for such an evaluation in future work, and exploring the impact of the taxonomy on the quality of migrated code remains a compelling direction. Given the observed benefits and the growing prevalence of LLM-assisted coding tools, it may be advisable for the maintainers of the Qiskit library to consider appending structured taxonomies to their release notes. Such an official source of taxonomies would prevent the proliferation of divergent unofficial ones. In our observations, improvements in the quality and completeness of the taxonomy led to better performance by the model in specific cases—particularly when documentation was ambiguous or incomplete. However, the impact of taxonomy quality on model performance was not quantitatively assessed in this study and warrants further investigation. Finally, prompt engineering also emerges as a relevant factor. In our case, the prompts did not explicitly instruct the model to adapt the refactoring to the specific context or variable names of the input code.
As a result, we observed occasional inconsistencies, such as the reuse of placeholder variables copied verbatim from the taxonomy examples. Refining the prompt design to guide the model more precisely in contextual adaptation is an open opportunity for optimization; for example, the prompt could be relaxed or tightened to allow or restrict flexibility in the responses. Regarding the refactoring dimension of the experiment, the results reveal a nuanced balance in the quality of the suggestions provided by the model. In some instances, the model produced overly simplistic recommendations that failed to address the underlying migration issue, often requiring additional searches and manual corrections. For example, in cases involving deprecated module imports, the model occasionally suggested their direct removal without considering their subsequent usage within the code, when a proper replacement would have been more appropriate. Conversely, in other cases, the proposed refactorings proved problematic due to poor adaptation to the specific context of the code, either by being too generic or misaligned with the intended functionality. Moreover, some suggestions exhibited an excessive level of specificity or were poorly suited to the actual migration need, ultimately failing to solve the compatibility issue. These findings suggest that achieving an effective balance between precision and flexibility in code migration tasks may depend heavily on the clarity and specificity of the prompt provided to the model—commonly referred to as prompt engineering [18]. Furthermore, improvements in the comprehensiveness and granularity of the taxonomy used to guide the model are also likely to enhance the quality and applicability of the refactoring recommendations. It is also worth mentioning that this study focused solely on the GPT-4 model (specifically gpt-4-0613).
Evaluating other models—especially those designed with a stronger orientation toward code understanding and generation, such as Codex, CodeLlama, or Claude for code—could provide valuable insights into the generalization of our findings. Exploring how different architectures handle taxonomy-based guidance remains an interesting and promising line of future work. All in all, our results suggest that the use of a taxonomy is useful both to guide the process of identifying migration scenarios and to inform potential refactoring suggestions. At the same time, we observed that the model is able to effectively complement the structured information provided by the taxonomy with its own prior knowledge, as inferred from the prompt instructions. The refactoring suggestions were, in many cases, well adapted to the specific context of the input code. The experiments confirm the model’s accuracy in detecting the lines of code that require refactoring and in referencing the corresponding scenario from the taxonomy (column Scenario ID). While the model typically analyzes code on a line-by-line basis, it also shows the ability to identify and coherently group logically related segments. On the other hand, we observed that the model does not attempt to identify refactoring scenarios that lie outside the target version explicitly specified in the prompt. In our study, this target was version 0.46.0, and the model consistently scoped its responses to that version. We regard this experimental study as part of a broader line of inquiry, building upon our previous work (Suárez et al. [19]), aimed at understanding the key factors that influence the accuracy of large language models (LLMs) in the context of quantum software refactoring. This includes identifying the strengths of current approaches as well as areas requiring improvement, with the goal of outlining the scope and practical applicability of LLMs for quantum software engineering (QSE) tasks.
In particular, we plan to enhance the structure, categorization, and supporting metadata used in the automatic generation of migration taxonomies. We hypothesize that the quality of this process will have a direct impact on both the precision of scenario identification and the relevance of the refactoring recommendations generated by LLMs. In addition, we envision the development of complementary tools focused on explainability and actionable insights, tailored to the specific improvements introduced in each new Qiskit release. These tools would support software teams in maintaining compatibility with cutting-edge versions while mitigating technical debt. Furthermore, we aim to advance the development of impact metrics that quantify the effect of refactoring in quantum software, in alignment with the metadata previously integrated into our taxonomy. Such metrics will allow for a more systematic assessment of the efficiency gains derived from updated algorithms, the adoption of new framework features, and the transparency of code modifications. As part of future work, we intend to define new metrics and refine existing ones to better capture the trade-offs involved in the refactoring process. Finally, we plan to extend the current evaluation pipeline to include recently released versions of Qiskit, particularly version 2.0, for which it is certain that the model lacks training data. This extension will allow us to assess model performance in scenarios beyond its training cutoff, offering valuable insights into its generalization capacity and adaptability to unseen migration requirements. # REFERENCES [1] Aylton Almeida, Laerte Xavier, and Marco Tulio Valente. Automatic Library Migration Using Large Language Models: First Results. In Proceedings of the 18th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, pages 427–433, October 2024. arXiv:2408.16151 [cs]. 
[2] Haider Asif, Abdul Basit, Nouhaila Innan, Muhammad Kashif, Alberto Marchisio, and Muhammad Shafique. PennyLang: Pioneering LLM-Based Quantum Code Generation with a Novel PennyLane-Centric Dataset, March 2025. arXiv:2503.02497 [cs]. [3] Nils Baumgartner, Padma Iyenghar, Timo Schoemaker, and Elke Pulvermüller. AI-Driven Refactoring: A Pipeline for Identifying and Correcting Data Clumps in Git Repositories. Electronics, 13(9):1644, January 2024. Number: 9 Publisher: Multidisciplinary Digital Publishing Institute. [4] Jonathan Cordeiro, Shayan Noei, and Ying Zou. An Empirical Study on the Code Refactoring Capability of Large Language Models, November 2024. arXiv:2411.02320 [cs]. [5] Giordano d’Aloisio, Sophie Fortz, Carol Hanna, Daniel Fortunato, Avner Bensoussan, Eñaut Mendiluze Usandizaga, and Federica Sarro. Exploring LLM-Driven Explanations for Quantum Algorithms. In Proceedings of the 18th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM ’24, pages 475–481, New York, NY, USA, October 2024. Association for Computing Machinery. [6] Yihe Deng, Weitong Zhang, Zixiang Chen, and Quanquan Gu. Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves, April 2024. arXiv:2311.04205 [cs]. [7] Nicolas Dupuis, Luca Buratti, Sanjay Vishwakarma, Aitana Viudes Forrat, David Kremer, Ismael Faro, Ruchir Puri, and Juan Cruz-Benito. Qiskit Code Assistant: Training LLMs for generating Quantum Computing Code, May 2024. arXiv:2405.19495 [quant-ph]. [8] Nam Huynh and Beiyu Lin. A Survey On Large Language Models For Code Generation. March 2025. [9] Ali Javadi-Abhari, Matthew Treinish, Kevin Krsulich, Christopher J. Wood, Jake Lishman, Julien Gacon, Simon Martiel, Paul D. Nation, Lev S. Bishop, Andrew W. Cross, Blake R. Johnson, and Jay M. Gambetta. Quantum computing with Qiskit, June 2024. arXiv:2405.08810 [quant-ph]. [10] Luis Jiménez-Navajas, Ricardo Pérez-Castillo, and Mario Piattini.
Code generation for classical-quantum software systems modeled in UML. Softw Syst Model, January 2025. [11] Ranim Khojah, Francisco Gomes de Oliveira Neto, Mazen Mohamad, and Philipp Leitner. The Impact of Prompt Programming on Function-Level Code Generation, December 2024. arXiv:2412.20545 [cs]. [12] Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 22199–22213. Curran Associates, Inc., 2022. [13] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. Retrieval-augmented generation for knowledge-intensive nlp tasks, 2021. [14] Kaixin Li, Qisheng Hu, Xu Zhao, Hui Chen, Yuxi Xie, Tiedong Liu, Qizhe Xie, and Junxian He. InstructCoder: Instruction Tuning Large Language Models for Code Editing, February 2024. arXiv:2310.20329 [cs]. [15] Paul D. Nation, Abdullah Ash Saki, Sebastian Brandhofer, Luciano Bello, Shelly Garion, Matthew Treinish, and Ali Javadi-Abhari. Benchmarking the performance of quantum computing software, February 2025. arXiv:2409.08844 [quant-ph]. [16] John Preskill. Quantum Computing in the NISQ era and beyond. Quantum, 2:79, August 2018. arXiv:1801.00862 [quant-ph]. [17] Nils Quetschlich, Lukas Burgholzer, and Robert Wille. MQT Bench: Benchmarking Software and Design Automation Tools for Quantum Computing. Quantum, 7:1062, July 2023. arXiv:2204.13719 [quant-ph]. [18] Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, and Aman Chadha. A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications, February 2024. arXiv:2402.07927 [cs]. [19] José Manuel Suárez, Luis Mariano Bibbó, Joaquín Bogado, and Alejandro Fernandez.
Taxonomy of migration scenarios for qiskit refactoring using llms. February 2025. Accepted for publication - JAIIO 54. [20] José Manuel Suárez, Joaquín Bogado, Luis Mariano Bibbó, and Alejandro Fernández. Software associated to automatic qiskit code refactoring using large language models. https://github.com/jwackito/qiskit_experiments_tlisc, June 2025. [21] Nikolaos Tsantalis, Ameya Ketkar, and Danny Dig. RefactoringMiner 2.0. IEEE Transactions on Software Engineering, 48(3):930–950, March 2022. Conference Name: IEEE Transactions on Software Engineering. [22] Lukas Twist, Jie M. Zhang, Mark Harman, Don Syme, Joost Noppen, and Detlef Nauck. Llms love python: A study of llms’ bias for programming languages and libraries, 2025. [23] Zhiqiang Yuan, Weitong Chen, Hanlin Wang, Kai Yu, Xin Peng, and Yiling Lou. TRANSAGENT: An LLM-Based Multi-Agent System for Code Translation, October 2024. arXiv:2409.19894 [cs]. [24] Jianjun Zhao. On Refactoring Quantum Programs, June 2023. arXiv:2306.10517 [cs]. [25] Celal Ziftci, Stoyan Nikolov, Anna Sjövall, Bo Kim, Daniele Codecasa, and Max Kim. Migrating Code At Scale With LLMs At Google, April 2025. arXiv:2504.09691 [cs].
As quantum software frameworks evolve, developers face increasing challenges in maintaining compatibility with rapidly changing APIs. In this work, we present a novel methodology for refactoring Qiskit code using large language models (LLMs). We begin by extracting a taxonomy of migration scenarios from the different sources of official Qiskit documentation (such as release notes), capturing common patterns such as the migration of functionality to different modules and deprecated usage. This taxonomy, along with the original Python source code, is provided as input to an LLM, which is then tasked with identifying instances of migration scenarios in the code and suggesting appropriate refactoring solutions. Our approach is designed to address the context length limitations of current LLMs by structuring the input and reasoning process in a targeted, efficient manner. The results demonstrate that LLMs, when guided by domain-specific migration knowledge, can effectively assist in automating Qiskit code migration. This work contributes both a set of proven prompts and a taxonomy for Qiskit code migration from earlier versions to version 0.46, and a methodology to assess the capabilities of LLMs to assist in the migration of quantum code.
[ "cs.SE", "cs.AI", "cs.ET" ]
# 1 Introduction Many real-world applications require large language models to integrate scattered information and infer logical answers to novel questions. For instance, an AI assistant supporting human resource specialists in determining an employee’s tax rate must combine information about the employee’s marital status and the spouse’s residency, as this affects the application of tax law. Such information often originates from scarce data sources or resides in separate systems where regulatory constraints limit direct access. Hence, compositional generalization, the ability of a model to create new combinations of known elements, is essential for the quality of such applications. As pretraining and fine-tuning become standard practices in Large Language Model (LLM) development, reusing shared model weights from foundation models and their fine-tuned variants has emerged as a practical strategy for generalization in data-scarce scenarios. Unlike Federated Learning [McMahan et al., 2017], this so-called model merging [Raffel, 2023] approach passively operates on shared model weights without coordinated training rounds. As parameter-efficient fine-tuning methods gain popularity, combining fine-tuned modules, especially low-rank adapters (LoRAs) [Hu et al., 2021], has emerged as a data-free alternative to enhance model capabilities [Beck et al., 2022, Huang et al., 2024, Zhao et al., 2024a, Ostapenko et al., 2024, Prabhakar et al., 2024, Zhao et al., 2024b, Yadav et al., 2025]. Users can exchange and merge LoRA updates at inference time like plug-and-play libraries. Consequently, this idea has sparked a proliferation of novel methods for reusing fine-tuned LoRA weights for new tasks [e.g. Huang et al., 2024, Ostapenko et al., 2024, Zhao et al., 2024a, Beck et al., 2022, Zhang et al., 2025]. These approaches are appreciated for their computational and economic efficiency. 
However, they are often developed and validated under varying experimental conditions, with differing assumptions about system architecture, data availability, usage scenarios, and computational budgets. While recent work has addressed such inconsistencies in combining entire fine-tuned foundation models [Tam et al., 2024], the various design choices for merging or routing LoRA modules have only been surveyed [Yadav et al., 2025], leaving many questions unanswered. Furthermore, Large Language Models (LLMs) gain knowledge through pretraining, while supervised fine-tuning on instruction-following tasks teaches them the style or format for user interaction [Zhou et al., 2023]. Consequently, fine-tuning LLMs with new knowledge often leads to hallucinations [Gekhman et al., 2024, Ghosal et al., 2024]. Low-Rank Adaptations (LoRAs) are inherently limited in their expressiveness [Zeng and Lee, 2024] and can reduce chain-of-thought (CoT) reasoning abilities [Lobo et al., 2025]. The effectiveness of combining LoRA modules to generalize to new tasks is therefore a critical concern. This position paper presents theoretical analysis and empirical findings on synthetic reasoning tasks to demonstrate the limitations of merging or routing LoRAs for zero-shot generalization to unseen tasks. Our findings indicate that combining LoRAs is ineffective for new tasks unless those tasks are already represented in the fine-tuning datasets. Low-level statistics, such as the familiarity of entities or Chain-of-Thought templates, serve as crucial bridges for integrating disjoint information among LoRAs to generate logical answers to novel queries. Understanding these mechanisms is crucial for designing systems that can effectively reuse LoRAs and create suitable fine-tuning datasets. Designers must consider the specific applications for LoRA reuse, as curated training data is vital for successful combination.
Our position hence is: We advocate for a shift in focus from algorithmic innovation to a rigorous understanding of the boundaries of adapter-based merging or routing, leveraging synthetic data and theoretical analysis. In the following sections, we begin with a discussion of related work and some overlooked perspectives. We then present theoretical analysis and empirical results that reveal the limitations of combining LoRAs, using synthetic two-hop reasoning and math problem setups. After discussing some alternative views, we conclude with our position on the effectiveness of LoRA combination. # 2 Discussion on Related Work Perspectives LoRA modules [Hu et al., 2021] have emerged as a privacy-friendly, data-free method for sharing model capabilities, allowing users to exchange LoRA updates and merge them at inference time like plug-and-play libraries [Beck et al., 2022, Huang et al., 2024, Zhao et al., 2024a, Ostapenko et al., 2024, Prabhakar et al., 2024, Zhao et al., 2024b, Yadav et al., 2025]. However, many recycling methods require examples from unseen tasks to estimate merging weights or routers, raising questions about how much successful generalization can be attributed to the LoRAs themselves. This highlights the need for mechanisms ensuring effective LoRA combination under limited data access. Weight averaging is a popular method for recycling LoRAs, inspired by findings that fine-tuned models remain in the same loss basin as pretrained weights [Neyshabur et al., 2020]. Task vectors, extracted from the difference between pretrained and fine-tuned model weights, can steer model behavior through arithmetic operations [Ilharco et al., 2023]. Recent algorithms focus on resolving merge conflicts [Yadav et al., 2023], randomly pruning redundant parameters [Yu et al., 2024], and estimating weights for averaging LoRAs [Huang et al., 2024, Prabhakar et al., 2024], but the mechanisms enabling successful generalization remain unexplored. 
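The weight-averaging idea can be sketched in a few lines of plain NumPy (not any particular library's merging API; the coefficients `alpha` stand in for the estimated merging weights discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 2
W0 = rng.normal(size=(d, d))                  # pretrained weight matrix

# Two LoRA task vectors, each a rank-r update delta_i = B_i @ A_i.
A1, B1 = rng.normal(size=(r, d)), rng.normal(size=(d, r))
A2, B2 = rng.normal(size=(r, d)), rng.normal(size=(d, r))

alpha = [0.5, 0.5]                            # merging coefficients
W_merged = W0 + alpha[0] * (B1 @ A1) + alpha[1] * (B2 @ A2)

# The merged update has rank at most r + r: averaging can only interpolate
# the adapters' existing directions, never create new ones.
assert np.linalg.matrix_rank(W_merged - W0) <= 2 * r
```

The rank bound at the end makes the structural limitation concrete: whatever the coefficients, the merged update lives in the span of the individual low-rank directions.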
Mixture of Experts (MoE) architectures leverage fine-tuned LoRA adapters for novel domains. However, many methods require the domain data to set up the MoE for retrieving experts or training routers [Chronopoulou et al., 2023, Zhao et al., 2024b, Jang et al., 2023]. Arrow [Ostapenko et al., 2024] is a notable exception: it routes LoRAs directly based on the similarity between query tokens and singular vectors of the LoRA experts. What can be recycled from a hub of LoRAs? When privacy is crucial, options are limited, especially for zero-shot generalization without training data. The latent logic in the pretraining corpus or term frequency may play a role in combining LoRAs for zero-shot generalization. Scaling language models has shown emergent abilities for zero-shot reasoning [Wei et al., 2022, Kojima et al., 2022], suggesting latent logical knowledge acquisition. Task vectors demonstrate analogical reasoning through arithmetic operations, but their effectiveness may depend on term co-occurrence frequency in the pretraining data [Merullo et al., 2025]. If low-rank adapters are linear approximations of the fine-tuned tasks, such term-frequency effects in pretraining may set the limit for combinations of LoRAs generalizing to tasks that are underrepresented in the pretraining dataset. An alternative view is that the observed generalization performance of merging or routing LoRAs reflects superficial pattern-matching rather than genuine compositionality. Empirical studies indicate LLMs rely on token-level cues, with small lexical changes affecting reasoning performance [Mirzadeh et al., 2024, Li et al., 2024]. LLMs struggle with latent multi-hop reasoning, relying on explicit prompting to bridge compositionality gaps [Press et al., 2023, Balesni et al., 2025]. Synthetic reasoning tasks thus play a key role in assessing compositional generalization, which indicates how effectively LoRA combination transfers to entirely novel tasks.
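A rough sketch of this routing signal (our paraphrase of the Arrow idea; the paper's exact scoring rule and normalization may differ) scores each expert by how strongly a token aligns with the top right-singular vector of that expert's weight update:

```python
import numpy as np

def arrow_scores(x, deltas):
    """x: token representation of shape (d,); deltas: list of LoRA weight
    updates of shape (d_out, d). Each expert is scored by |<x, v>| where v
    is the top right-singular vector of its update — a data-free routing
    signal, per our reading of Arrow [Ostapenko et al., 2024]."""
    scores = []
    for dW in deltas:
        _, _, Vt = np.linalg.svd(dW, full_matrices=False)
        scores.append(abs(Vt[0] @ x))     # abs() handles SVD sign ambiguity
    return np.array(scores)

rng = np.random.default_rng(1)
d = 8
# Expert 1's rank-one update reads input direction u; expert 2 reads e_1.
u = np.zeros(d); u[0] = 1.0
delta1 = np.outer(rng.normal(size=d), u)
delta2 = np.outer(rng.normal(size=d), np.eye(d)[1])
token = u.copy()
# A token aligned with u should route to expert 1 (index 0).
assert arrow_scores(token, [delta1, delta2]).argmax() == 0
```

The point of the sketch is that no task data enters the router: everything is derived from the adapters' own weights, which is exactly what makes the zero-shot setting both attractive and fragile.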
We began with theoretical analysis in 2-hop reasoning scenarios, mimicking real-world cases where models answer questions about unseen entity relationships. Using synthetic data to avoid pretraining contamination, we tested whether combining LoRAs without further training enables zero-shot solutions for 2-hop reasoning and complex math problems. Finally, we repeated experiments on models pretrained with different methods (e.g., chain-of-thought distillation, math corpus) to identify conditions for successful LoRA reuse.

# 3 Theoretical Analysis

Here, we argue theoretically that low-rank adaptation, while it can store new facts in transformers, is unlikely to lead to compositional behavior when combining different LoRAs. We study this by considering the problem of composing knowledge from two LoRAs, where each contains factual knowledge, and their combination is expected to perform two-hop reasoning [e.g. Yang et al., 2024a, Balesni et al., 2025] that requires both pieces of knowledge. In general, direct theoretical understanding of multi-layer softmax transformers is very difficult; but many theoretical insights have been obtained by studying one-layer models and the limit of very wide models. We use this approach to perform a simple analysis of low-rank adaptation for factual knowledge. Our setup is inspired by a prior theoretical study of factual recall in transformers [Nichani et al., 2025] focusing on one-layer transformers. For simplicity, we focus on the special case of a single attention head, and do not assume noise tokens in the context. Nichani et al. [2025, Section 4] show that either an MLP or an attention head can perform factual recall. Adapting the MLP with LoRA on a one-hop prompt can change individual facts – such as setting $r_1(x_1)$ to $x_2$. However, importantly, combining LoRAs adapting two relations will not result in compositional behavior, as we show below.
# 3.1 Setup Entities, Facts, and Prompts We consider a set $\mathcal{X}$ of entities, and a set $\mathcal{R}$ of binary relations $r \subset \mathcal{X} \times \mathcal{X}$ (e.g., X is married to Y; Y lives in Z, etc.). We assume that each $r$ is a partial function, i.e., for each $x$, there is at most one $y$ satisfying $(x, y) \in r$; we write $y = r(x)$. Whereas Nichani et al. [2025] relied on the assumption that each relation maps to a disjoint output space, we avoid this assumption. We assume that the model operates on the following prompts: 1. One-Hop: X REL (where X represents an entity $x \in \mathcal{X}$ and REL represents a relation $r \in \mathcal{R}$), with expected completion: Y, where $y = r(x)$ (e.g., “the spouse of X is Y”). 2. Two-Hop: X REL1 REL2 (where REL1, REL2 represent relations $r_1, r_2 \in \mathcal{R}$), with expected completion: Y, where $y = r_2(r_1(x))$ (e.g., “the place of birth of the spouse of X is Y”). Simple Transformer Model We consider a vocabulary consisting of relations $r_1, r_2, \ldots$ and entities $x_1, x_2, \ldots$, with token embeddings $e_{r_i}, e_{x_i} \in \mathbb{R}^d$. We will write $E \in \mathbb{R}^{|\mathcal{X} \cup \mathcal{R}| \times d}$ for the matrix holding all token embeddings. We assume a single softmax attention head with matrices $K, Q, V \in \mathbb{R}^{d \times d}$, and a ReLU MLP with hidden dimension $m$ given by matrices $U \in \mathbb{R}^{m \times d}$, $W \in \mathbb{R}^{|\mathcal{X}| \times m}$, mapping a vector $x$ to $W \cdot \mathrm{ReLU}(Ux)$. We do not require positional encodings. 
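A minimal numerical sketch of this model is given below. The dimensions, the random-initialization scale, and the exact form of the attended input (value-transformed entity plus the relation token's residual contribution) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n_entities = 32, 128, 5

E_x = rng.normal(0, 1 / np.sqrt(d), (n_entities, d))  # entity embeddings
e_rel = rng.normal(0, 1 / np.sqrt(d), d)              # one relation embedding
U = rng.normal(0, 1 / np.sqrt(d), (m, d))
V = rng.normal(0, 1 / np.sqrt(d), (d, d))
W = rng.normal(0, 1 / np.sqrt(m), (n_entities, m))    # the trained output map

def forward(x_idx):
    # uniform attention contributes the value-transformed entity V e_x;
    # the relation token enters through the residual stream (an assumption)
    h = np.maximum(U @ (V @ E_x[x_idx]) + U @ e_rel, 0.0)
    return W @ h                                      # scores over entities
```

Only `W` is trained in this setup; everything else stays at its random initialization, as described below.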
We assume that the next-token prediction is provided by $W$ as a one-hot encoding of the target entity, $i_x$, omitting softmax for simplicity. Our aim is to showcase limitations in composition, not in storage of knowledge itself; hence, we allow the model a width $d$ substantially larger than $|\mathcal{X}||\mathcal{R}|$. In order to give the MLP as much capacity as needed, we allow $m$ to be arbitrarily large. Nichani et al. [2025] took $E$ to be randomly initialized and not trained. We follow this assumption, and additionally take $U, V$ to remain untrained, as we do not assume noise tokens in the context. Overall, we assume that the $U, V, E$ matrices are randomly initialized, all with entries from $\mathcal{N}(0, \frac{1}{\sqrt{d}})$. We restrict training to $W$. This represents a random features setup [e.g. Rahimi and Recht, 2008, Ghosal et al., 2022, Dirksen et al., 2022]. In this setup, softmax attention is close to uniform; we will take it to be exactly uniform for simplicity. We will examine the situation where the base model already performs correctly for the given set of relations $\mathcal{R}$, and $W$ is then adapted to reflect edits to such facts. We focus LoRA on $W$, in agreement with our experimental finding in Section 4.1 that applying LoRA to the MLPs can be sufficient to get most of the gains. We consider updates $\Delta W = AB^T$ with $A \in \mathbb{R}^{|\mathcal{X}| \times s}$, $B \in \mathbb{R}^{m \times s}$, where $s$ is small, subject to an L2 penalty $\|A\|_F + \|B\|_F$. We particularly consider rank-one updates, $\Delta W = pq^T$ with $p \in \mathbb{R}^{|\mathcal{X}|}$, $q \in \mathbb{R}^m$. We note that this setup simplifies many aspects of transformers: there is only one layer and one head, and training focuses on the (linearized) output. We also remove the softmax over the vocabulary in the output. 
Our setup is designed to be the simplest possible setup in which a nontrivial statement about LoRA’s ability to learn and combine abilities can be made. # 3.2 Results The correct responses to all 1-hop and 2-hop relations can jointly be coded into $W$ when $d$ and $m$ are sufficiently large, due to the separation ability of the random features model [Ghosal et al., 2022]. This analysis is in line with mechanistic studies of factual recall suggesting MLPs act as key-value storage [Geva et al., 2021]. Changing a fact $y = r(x)$ requires changing the output of the MLP on the subspace spanned by the entity and relation. When the update affects only a single fact, L2 regularization ensures that it has a simple and interpretable closed form: Proposition 1. A rank-one update to $W$ changing the output on a prompt X REL from $r(x)$ to $\tilde{r}(x)$ must have the form: $$ \Delta W_{r \mapsto \tilde{r}} = \frac{1}{\| \mathrm{ReLU}(U V e_x + U e_{\mathrm{REL}}) \|_2^2} \, (i_{\tilde{r}(x)} - i_{r(x)}) \, \mathrm{ReLU}(U V e_x + U e_{\mathrm{REL}})^T $$ This is similar to the ROME update [Meng et al., 2022]. Intuitively, based on the idea that MLPs act as key-value storage, the LoRA update $\Delta W = AB^T$ specifically addresses the encoding of the prompt X REL in the $B$ matrix, and the changed output in the $A$ matrix. The proof is in Appendix A.1. Now consider a two-hop prompt X REL1 REL2, intended to denote the composition of the two relations. Given sufficient width, any set of such two-hop facts can be encoded in $W$. However, as we next show, adding two LoRAs modifying two relations $(\Delta W_{r_1 \mapsto \tilde{r}_1}, \Delta W_{r_2 \mapsto \hat{r}_2})$ will not unlock compositional behavior on the new facts: Theorem 2. 
Assume LoRAs $\Delta W_{r_1 \mapsto \tilde{r}_1}$, $\Delta W_{r_2 \mapsto \hat{r}_2}$ are created to adapt two single facts for $r_1, r_2$. Summing these adapters will not result in correct results for the composition of the two relations $r_1, r_2$. The formal proof is in Appendix A.1. The reasoning is as follows. As shown in Proposition 1, the two LoRAs specifically modify the MLP output on the subspaces inhabited by the activations computed on the two one-hop prompts. When the model encounters a two-hop prompt, the activations will partly overlap with the subspaces for both one-hop prompts, and the adapters will lead the model to output not the composition $r_2(r_1(x))$, but a linear mixture of two relevant entities. A natural question is whether some of the routing or weighting methods proposed in the literature resolve this; it turns out that the argument extends to those: for instance, weighted averaging of the two adapters [e.g. Prabhakar et al., 2024, Ostapenko et al., 2024] will still fail to perform compositionally when several facts are updated (see Appendix A.1.1 for more). Yet another approach might be to combine a larger library of LoRAs where some have been trained on 2-hop examples from other task pairs. One might hope that this would prime the model towards compositional behavior; however, the reasoning above still applies, and suggests that reusing LoRAs would still fail to behave compositionally (Appendix A.1.1). One limitation of our theoretical analysis is that (in line with Nichani et al. [2025]) it applies to a single-layer transformer; our experiments test the applicability of the conclusions to LLMs across scales. # 4 Experiments Our experiments aim to test under what circumstances combining LoRAs can enable LLMs to perform new tasks that require logical combinations of different LoRAs’ expertise. 
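As a bridge from the theory to the experiments, the rank-one edit of Proposition 1 and the mixture effect behind Theorem 2 can be checked numerically. The following is only a sketch under the Section 3 random-features assumptions; the dimensions and the treatment of the two-hop activation (averaging the relations' residual contributions) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n = 64, 512, 10                              # widths and entity count

U = rng.normal(0, 1 / np.sqrt(d), (m, d))
V = rng.normal(0, 1 / np.sqrt(d), (d, d))
E = rng.normal(0, 1 / np.sqrt(d), (n + 2, d))      # entities plus 2 relations
e_x, e_r1, e_r2 = E[0], E[n], E[n + 1]

def hidden(e_ent, *e_rels):
    # ReLU(U V e_ent + U e_REL); on the two-hop prompt the relations'
    # residual contributions are averaged (an illustrative assumption)
    return np.maximum(U @ (V @ e_ent) + U @ (sum(e_rels) / len(e_rels)), 0.0)

one_hot = np.eye(n)
y_old, y_new = 3, 7                                # edit r1(x): entity 3 -> 7
h1 = hidden(e_x, e_r1)                             # prompt "X REL1"

# Proposition 1: the L2-minimal rank-one edit, applied to a base W
# that already stores the old fact (W h1 = one_hot[y_old])
W = np.outer(one_hot[y_old], h1) / (h1 @ h1)
dW1 = np.outer(one_hot[y_new] - one_hot[y_old], h1) / (h1 @ h1)
assert np.argmax((W + dW1) @ h1) == y_new          # the single fact is flipped

# Theorem 2 intuition: the two-hop activation only partially overlaps
# each adapter's subspace, so summed adapters contribute a fractional
# mixture of the two edited targets instead of a composition
h2 = hidden(E[y_new], e_r2)                        # prompt "Y REL2"
h12 = hidden(e_x, e_r1, e_r2)                      # prompt "X REL1 REL2"
a1 = (h1 @ h12) / (h1 @ h1)
a2 = (h2 @ h12) / (h2 @ h2)
assert 0 < a1 < 1 and 0 < a2 < 1                   # a mixture, not composition
```

The fractional coefficients `a1` and `a2` are exactly the "partial overlap" in the argument above: each adapter fires at partial strength on the two-hop activation, yielding a blend of edited targets rather than $r_2(r_1(x))$.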
Our primary focus is on the two data-agnostic routing methods, Uniform averaging and Arrow routing [Ostapenko et al., 2024], which operate directly on the shared LoRA experts’ weights (see Appendix A.2.2 for details). We synthesized two reasoning tasks, 2-hop reasoning and easy-to-hard math word problems, to examine the factors underlying successful zero-shot generalization when reusing existing LoRAs on novel tasks. Specifically, we investigate the preconditions necessary for LoRA routing to be effective, such as entity familiarity, domain-specific pretraining of the base models, and the necessity of the presence of the novel tasks in the fine-tuning of the LoRA experts. We assess how these effects scale across routing strategies and base model sizes, ranging from 3 billion to 70 billion parameters. Our findings emphasize the importance of domain-specific pretraining, common templates for LoRA fine-tuning datasets, and potential interference from routing compared to individual LoRA experts. # 4.1 Two-Hop generalization We investigate whether combining two LoRAs enables compositional reasoning. Building on our theoretical analysis and inspired by Balesni et al. [2025], we design a two-hop reasoning task requiring composition across linguistic variations while controlling for base model knowledge. The dataset uses a fixed structure: First Hop ($A \to B$, identifying the spouse of a given entity $A$), Second Hop ($B \to C$, identifying the residence of $B$), with the goal of inferring $A \to C$. This setup closely follows our Theorem 2, which suggests that LoRAs each trained on one of the two hops would, if combined, not unlock the indirect relationship. 
We construct three datasets varying the nature of entity names and locations while ensuring that the relational facts remain synthetic: $F$ (fake names, fake locations), where both entities and locations are synthetic (e.g., (Zint, Frosk, Narik)); $H$ (fake names, real locations), where names are synthetic but locations are real (e.g., (Zint, Frosk, London)); and $R$ (real names, real locations), where both names and locations are real, but relationships are deliberately shuffled to remain false (e.g., (Barack Obama, Camila Alves, London)). We refer to the first-hop ($A \to B$), second-hop ($B \to C$), and two-hop ($A \to C$) subsets of each dataset as $F_1, F_2, F_{12}$, $H_1, H_2, H_{12}$, and $R_1, R_2, R_{12}$, respectively (see Table 4 and Appendix A.2.1 for examples and details). Based on ablation studies (Tables 7 and 8 in Appendix A.2.2), we focused on fine-tuning only the MLP layers of the following base models: Qwen2.5-3B-Instruct, Qwen2.5-7B-Instruct, and Qwen2.5-14B-Instruct [Qwen-team, 2025], as well as DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B, and DeepSeek-R1-Distill-LLaMA-70B [DeepSeek-AI-team, 2025]. # 4.1.1 Impact of base model and familiarity Experiment setup For each dataset ($F$, $H$, $R$), we train four LoRA adapters (experts) LoRA 1, LoRA 2, LoRA 3 (Oracle Expert), and LoRA 4 (Mixed Two-Hop Expert) on relation $A \to B$, $B \to C$, $A \to C$, and mixed data ($A \to B$, $B \to C$), respectively. From these experts, we construct two libraries: a 2-combination library, which includes LoRA 1 and LoRA 2; and a 3-combination library, which includes LoRA 1, LoRA 2, and LoRA 3. We evaluate the model’s ability to generalize and infer $A \to C$ relationships. We use Chain-of-Thought (CoT) prompting during testing for both the 3-combination and the 2-combination libraries. 
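The synthetic subsets can be generated along the following lines; the prompt templates and entity names are hypothetical placeholders rather than the paper's exact data format:

```python
import random

def make_two_hop_dataset(triples, seed=0):
    """Build one-hop (A->B spouse, B->C residence) and two-hop (A->C)
    splits from (A, B, C) triples. A sketch: the templates and names
    are hypothetical placeholders, not the paper's exact format."""
    rng = random.Random(seed)
    f1 = [(f"The spouse of {a} is", b) for a, b, _ in triples]
    f2 = [(f"The residence of {b} is", c) for _, b, c in triples]
    f12 = [(f"The residence of the spouse of {a} is", c) for a, _, c in triples]
    rng.shuffle(f12)                    # decouple split order from the triples
    return f1, f2, f12

# F-style synthetic triples: fake names and fake locations
triples = [("Zint", "Frosk", "Narik"), ("Blar", "Quev", "Tolm")]
F1, F2, F12 = make_two_hop_dataset(triples)
```

The $H$ and $R$ variants follow the same construction, swapping in real locations or real (but shuffled) names.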
Results and analysis As shown in Figure 1a, performance on the 3-combination library for the $H$ dataset improves with model size, while the 2-combination library remains consistently poor. Figure 1b shows that $R$ outperforms $H$, and both far exceed $F$. Notably, with only $A \to B$ and $B \to C$ adapters (2-combination library), accuracy stays below $10\%$, supporting Theorem 2 that composing knowledge across separate LoRAs is inherently challenging. Even when $A \to C$ is covered (3-combination library), routing does not always succeed, in particular in smaller models and in the presence of unfamiliar entities, such as the fake names or cities in the $F$ dataset. These trends hold across datasets and model families (see Table 6 in Appendix A.2.2). Figure 1: Performance of LoRA libraries and individual experts on two-hop datasets. (a) Comparison of library-level and expert-level performance on the test set of $H$ across different base models. (b) Impact of entity familiarity on the performance of LoRA combination methods, using Qwen2.5-14B-Instruct as the base model, evaluated on three test sets: $F$, $H$, and $R$. Across setups and model sizes, performance of the 2-combination library is poor; including an expert trained on the target task (the $A \to C$ relationship) is necessary. # 4.1.2 Composition requires close match between testing and training prompts So far, we found that synthesizing two-hop reasoning from two LoRAs is difficult, in agreement with our theoretical predictions. What strategies could enable composition? While a substantial amount of work has found chain-of-thought to support reasoning (including in the 2-hop setup when employing full-parameter finetuning [Balesni et al., 2025]), we found that composition from the two $A \to B$ and $B \to C$ experts performs poorly even with chain-of-thought prompting. 
Our theoretical analysis suggests that, as LoRA adapters target specific low-dimensional subspaces, compositional behavior can only be unlocked when the target prompts show a close formal match to the prompts on which the LoRAs were trained. CoT prompting might thus be insufficient, as the form of the targets mismatches the one-hop training examples of the LoRAs. However, mixing CoT examples into the LoRA training data might be sufficient. In this section, we test whether it is possible to enable composition by including CoT templates in the training data, and how close the match in reasoning patterns needs to be between the finetuning and testing datasets. We specifically define the following bridge technique, and design a series of experiments to determine how closely the target task needs to be represented in the LoRA finetuning datasets to enable compositional behavior. The idea is that the finetuning dataset additionally includes examples of the targeted reasoning pattern. We test, via ablations, which aspects of the targeted reasoning pattern need to be present. Experimental setup We design two bridge variants over the $F$ and $R$ datasets. The Fake Bridge ($B_F$) is constructed by concatenating the $F_1$ ($A \to B$), $F_2$ ($B \to C$), and $F_{12}$CoT (CoT-formatted $A \to C$) subsets. The Real Bridge ($B_R$) follows the same structure but uses the corresponding subsets ($R_1$, $R_2$, $R_{12}$CoT) from the $R$ dataset, which contains real names and locations (see Table 5 and Figure 3 in Appendix A.2.1 for examples and details). We fine-tune adapters on the union of direct-hop examples and a bridge dataset. 
In the first configuration, Setup 1, models are trained by mixing fake data subsets ($F_i$) with the Real Bridge set ($B_R$): LoRA 1 is trained on $F_1 + B_R$, and LoRA 2 on $F_2 + B_R$, with evaluation on the held-out subset $F_{12}$. In the second configuration, Setup 2, models are trained by mixing real data subsets ($R_i$) with the Fake Bridge set ($B_F$): LoRA 1 is trained on $R_1 + B_F$, and LoRA 2 on $R_2 + B_F$, with evaluation on the held-out subset $R_{12}$. Results and analysis Figure 2 demonstrates that explicitly incorporating the target two-hop reasoning pattern into the LoRA fine-tuning data is crucial for achieving reliable compositional generalization. In Setups 1 and 2, where each adapter is trained on a synthetic direct-hop subset combined with the bridge dataset, Arrow performance improves significantly compared to earlier experiments. Additionally, the bridge setup is much more successful in Setup 2, highlighting the importance of entity familiarity for effective generalization. We conducted a set of ablations to analyze which aspects are important for the success of this strategy. Incorporating structured reasoning into LoRA finetuning yields only marginal gains unless the finetuning data closely mirror the target two-hop task. First, as shown in Table 1, the bridge improves two-hop accuracy only when CoT-formatted $A \to C$ instances are included during adapter training. Omitting CoT formatting results in worse performance (Setups 2 vs. 3). Second, simply including the bridge in only one of the two LoRA adapters (Setups 4, 5), or providing only $A \to C$ prompts without the individual one-hop tasks (Setups 6, 7), results in significantly weaker compositional performance compared to setups where both $F_1$ ($A \to B$) and $F_2$ ($B \to C$) are included alongside CoT bridging. 
This highlights the importance of exposing the model to each subtask. Third, we found that relaxing the bridge to use disjoint task pairs still produces nontrivial gains (Setup 8, which uses completely different relations as the bridge dataset: $F_4$: study_in and $F_5$: child_of; see Table 4 in Appendix A.2.1 for examples), suggesting that exact task-pair matching is less critical so long as the finetuning set contains examples reflecting the overall reasoning pattern. Altogether, these results confirm the importance of CoT exemplars and the individual tasks for unlocking the generalization of Arrow routing, even when the exemplars are semantically different from the target task. Figure 2: (left) In the bridge setup, both LoRA experts are trained not only on one of the two hops, but also on examples (disjoint from those needed in testing) of both hops and of chain-of-thought two-hop reasoning. (right) Performance comparison of two setups across different base models: Setup 1 (Real Bridge, $B_R$) adds a bridge using real names and locations to a dataset using fake names and fake locations ($F$), while Setup 2 (Fake Bridge, $B_F$) reverses these. Each setup uses LoRA 1 and LoRA 2 as the experts in the library. The bridge setup is much more successful in Setup 2. Aside from combinations of two or three LoRAs, we further tested what happens when increasing the number of tasks present in the collection of LoRAs, including both various one-hop tasks and various two-hop tasks. Even in this case, we found that composition was very difficult, again in agreement with our theoretical predictions (analysis in Appendix A.3). Overall, these results support our conclusions that (i) direct composition of knowledge from different LoRAs is very difficult, and (ii) the LoRA finetuning datasets must contain examples closely matching the target reasoning behavior. Table 1: Ablations for the bridge training setups. 
We compare the 2-combination libraries trained on just the two hops (0) and the full bridge (2) with various strategies interpolating between them, such as providing a bridge in only one expert (4 and 5) or omitting the CoT template from the bridge (3). The full bridge attains the highest performance, and versions not including a bridge CoT in both experts perform poorly (0, 3, 4, 5). # 4.2 Generalization from Easy to Hard Math Word Problems To evaluate whether our findings hold in more realistic settings, we use the GSM-Symbolic benchmark [Mirzadeh et al., 2024], which enables controlled assessment of reasoning robustness in math across well-defined difficulty levels. Each LoRA expert was fine-tuned individually on GSM-Symbolic (original) and GSM-P1 (with one added clause), before being combined for evaluation on GSM-P2 (which adds another clause). We compare general-purpose and math-specialized models to assess the impact of pretraining. Similar to exposing LoRAs to solutions closely resembling the target task, we also tested whether fine-tuning with reusable Markdown and Python code [Suzgun et al., 2025] would improve generalization on GSM-P2. Detailed experimental design, fine-tuning, and evaluation procedures can be found in Appendix Section A.4.1. Limitations of LoRA Routing for Compositional Generalization. The effectiveness of LoRA routing is highly dependent on the base model’s pretraining history. To start, we replicated the findings of Mirzadeh et al. [2024], which show that large language models (LLMs) lack robustness in mathematical reasoning (see Appendix Section A.4.2, Table 12). Routing methods such as Uniform and Arrow provided modest improvements for the general-purpose Qwen2.5-1.5B-Instruct model, but often degraded performance in math-specialized models like Qwen2.5-Math-Instruct, regardless of model size (Table 2). Among these, Uniform consistently outperformed Arrow. 
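For reference, Uniform averaging is the simpler of the two baselines: it merges the expert updates with equal weights and no routing signal at all. A minimal sketch (shapes and the scaling factor are illustrative):

```python
import numpy as np

def uniform_merge(W0, experts, alpha=1.0):
    """Data-agnostic uniform averaging of LoRA experts (a sketch):
    W = W0 + (alpha / n) * sum_i B_i @ A_i."""
    delta = sum(B @ A for A, B in experts) / len(experts)
    return W0 + alpha * delta
```

Because merging happens once in weight space, inference cost matches the base model, unlike per-token routing.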
Echoing prior work showing that 8-shot GSM8K in-context examples do not improve performance on GSM-P2 [Mirzadeh et al., 2024], we further observed that combining these examples with LoRA routing actually worsened results. For example, in the Qwen2.5-Math-7B-Instruct model, Arrow routing with in-context examples reduced GSM-P2 accuracy from 0.27 to 0.06 (see Appendix Section A.4.2, Table 13 for details). The performance drop observed after LoRA routing may stem from a mismatch between the finetuning data and the base model’s capabilities. Qwen2.5-Math-Instruct is designed to solve problems using Markdown and Python code, while the GSM-Symbolic benchmarks provide only natural language Chain-of-Thought (CoT) solutions. As a result, routing LoRAs fine-tuned on this dataset may suppress the model’s tool-integrated reasoning abilities and lead to an increase in calculation errors. Our error analysis follows the definitions and procedures outlined by Zhong et al. [2025]. See Appendix Section A.4.2 and Table 14 for details. Table 2: Accuracy comparison on zero-shot GSM-P2 after routing LoRA experts individually fine-tuned on GSM-Symbolic and GSM-P1. How can programming languages bridge the generalization gap? Our experimental design is motivated by recent findings from Dynamic Cheatsheet [Suzgun et al., 2025], which demonstrate that encouraging language models to retain and apply reusable intermediate solutions during inference significantly improves their performance on math problems. We extend this idea using the GSM-Symbolic benchmark [Mirzadeh et al., 2024], where generalization from easier to harder problem variants requires understanding the full computational graph (Appendix Figure 5). In the previous setting, each LoRA is fine-tuned on partial solutions corresponding to subsets of reasoning steps (e.g., the black or orange subgraphs in Appendix Figure 5). 
However, routing these LoRAs alone does not suffice to solve the more complex P2 variant, which involves the complete computational graph (blue subgraph in Appendix Figure 5). We hypothesize that reusable Markdown and Python solutions can bridge partial representations and enhance compositional generalization through LoRA routing; to test this, we implemented two agent-based actor-critic workflows [Wu et al., 2024] to generate fine-tuning data (see Appendix A.4.1 for implementation details). Table 3 demonstrates modest improvements in solving the complex P2 problems via routing LoRAs fine-tuned with these reusable code solutions. The improvement is clearer in the smaller model (Qwen2.5-Math-1.5B-Instruct) when fine-tuning targets the MLP layers. This finding emphasizes the need for system designers to understand how to effectively reuse LoRA experts to guide data generation, while also noting that reusing LoRAs is most effective when target tasks are clearly defined beforehand. Table 3: Enhancing easy-to-hard generalization by leveraging a Tool-Integrated Reasoning (TIR) prompt and fine-tuning with reusable code. # 5 Alternative Views While our findings indicate that combining LoRAs is ineffective for new tasks unless those tasks are already represented in the fine-tuning datasets, alternative PEFT methods may offer better compositional results. For instance, LoRI [Zhang et al., 2025] addresses cross-task interference by combining random projections with task-specific masks, potentially enabling better adapter routing. However, positive results for compositional reasoning have not been reported, and our theoretical analysis suggests that it remains challenging. Similarly, LoRA Lego [Zhao et al., 2024a] formalizes low-rank updates as composed of independent units and clusters these into new adapters to reduce interference, though it has not been shown to enable compositional reasoning. 
Self-MoE [Kang et al., 2024] constructs experts from self-generated specialization data and a trained router, but it remains underexplored to what extent this method can enable compositional combination of different abilities. FLiX [Sun et al., 2024] learns different low-rank updates for various task or dataset features, and CS-ReFT [Sun et al., 2024] learns orthonormal task-specific adaptors. Despite these innovations, none has demonstrated effective compositional combination of skills, and our theoretical analysis suggests inherent limitations. Another perspective is to train models specifically for generalization and composition, even if this requires data from the target task. Recent work [Prabhakar et al., 2024] has proposed LoRA concatenation as an effective method for composing skills to solve challenging math word problems, such as those in GSM-Hard [Gao et al., 2023]. We recognize the significance of these findings, particularly their demonstration that decomposing skills into reusable LoRAs and estimating appropriate combination weights can enhance performance, provided that additional task-specific data and knowledge are available. However, our work takes a different perspective. Unlike GSM-Hard [Gao et al., 2023], which primarily modifies numerical ranges while preserving the question format of the original GSM8K problems, GSM-Symbolic-P2 [Mirzadeh et al., 2024] presents more realistic and difficult compositional generalization challenges: it alters the question format and the structural complexity of the math problems, producing entirely unseen problem forms. Our theoretical analysis indicates this limit (Appendix A.1.1), and it is supported by empirical results showing little gain from such training in a 2-hop reasoning setting (Appendix Table 10). This suggests that the benefits of such approaches may not extend to more challenging generalization tasks like GSM-Symbolic. 
While skill composition remains important, our results highlight a key limitation of LoRA routing approaches, shown in both our theoretical analyses and our empirical findings: their effectiveness often depends on foreknowledge of, or training data from, the downstream tasks, which may not be available in practice.
Merging or routing low-rank adapters (LoRAs) has emerged as a popular solution for enhancing large language models, particularly when data access is restricted by regulatory or domain-specific constraints. This position paper argues that the research community should shift its focus from developing new merging or routing algorithms to understanding the conditions under which reusing LoRAs is truly effective. Through theoretical analysis and synthetic two-hop reasoning and math word-problem tasks, we examine whether reusing LoRAs enables genuine compositional generalization or merely reflects shallow pattern matching. Evaluating two data-agnostic methods--parameter averaging and dynamic adapter selection--we found that reusing LoRAs often fails to logically integrate knowledge across disjoint fine-tuning datasets, especially when such knowledge is underrepresented during pretraining. Our empirical results, supported by theoretical insights into LoRA's limited expressiveness, highlight the preconditions and constraints of reusing them for unseen tasks and cast doubt on its feasibility as a truly data-free approach. We advocate for pausing the pursuit of novel methods for recycling LoRAs and emphasize the need for rigorous mechanisms to guide future academic research in adapter-based model merging and practical system designs for practitioners.
# 1 Introduction Modern database management systems (DBMS) are universal solutions for managing large amounts of information. Nonetheless, the functionality provided by a DBMS is in some cases excessive, resulting in technical requirements that exceed the capabilities of existing hardware platforms. As technology advances, however, new problems arise that require storage and computation in environments where full-fledged personal computers and large disk space cannot be used, for instance in applications such as satellites and autonomous rovers. Existing DBMS include: 1. LittleDB [2] - an SQL database developed by Pouria Moosavi that is suitable for embedded systems (basic automation systems). It uses a relational model. 2. LMDB [3] - a DBMS developed by the Symas company, deployable on basic automation controllers. It is based on a key-value architecture. 3. Berkeley DB [4] - a DBMS developed by Oracle Corporation that uses a key-value structure. Like other lightweight DBMS, it can be deployed on basic automation controllers with integrated standard libraries. One of the major disadvantages of the above-mentioned solutions is the lack of a mechanism for protecting data from single-event upsets (SEUs). While this issue may be partially addressed through the use of radiation-resistant memory with ECC mechanisms [5], it does not resolve the major problem, namely the lack of noise immunity of data stored in read-only memory (ROM). Hence, the development of a small, modular DBMS that manages data effectively in difficult conditions and can operate under constraints such as limited disk space and processing power is relevant. Such a DBMS can be implemented for basic automation in systems using low-cost microcontrollers (e.g., STM32F103C6T8 [6]) or higher-cost, noise-immune microcontrollers like the 1874VE7T [7] and 1887VE6T [8]. 
# 2 Features of the developed architecture Since comparable DBMS require an extensive amount of random access memory (RAM), the established architectural solutions [9] were not feasible for a modular lightweight DBMS. Instead, a memory manager was developed whose principle is static allocation of fixed-size memory blocks, tracked by a singly linked list. The memory manager and the standard libraries for interacting with it are integrated into the DBMS kernel, which comprises: 1. Input tokenizer - splits commands into separate directives. 2. Syntax parser - executes directives with tokenized parameters. 3. Table abstraction layer - manages the directory layer, with support for parallel computation and the creation of data-processing threads. 4. Directory abstraction layer - manages the page layer, with support for parallel computation and the creation of data-processing threads. 5. Page layer - the deepest abstraction layer. Pages in the DBMS kernel function similarly to paging in the Linux kernel, storing data together with error-correcting bits and processing them via a Hamming code for bit-error correction [10]. In contrast to DBMS solutions that store data in a single file, the developed software includes a data fragmentation system that stores data in a large number of fixed-size blocks distributed over directories of the file system. This approach achieves the following properties: 1. Increased resistance of the entire system to single-event upsets (SEUs) due to the physical dispersion of data. 2. Delegation of the tasks of finding pages and directories to the file system. 3. Optimization of the algorithms for deleting and adding data without requiring additional memory. 
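The single-error correction used by the page layer (item 5 above) can be illustrated with a classic Hamming(7,4) codec. The DBMS's actual code width and bit layout are not specified here, so the following is only a sketch of the technique:

```python
def hamming74_encode(nibble):
    """Encode 4 data bits into a 7-bit SEC Hamming codeword
    (bit order: p1 p2 d0 p3 d1 d2 d3)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]          # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]          # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]          # parity over positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(bits):
    """Decode a 7-bit codeword, correcting any single flipped bit."""
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based error position, 0 if none
    if syndrome:
        bits = bits[:]
        bits[syndrome - 1] ^= 1      # correct the single-bit upset
    d = [bits[2], bits[4], bits[5], bits[6]]
    return sum(b << i for i, b in enumerate(d))
```

Any single bit flip in a stored codeword, such as one caused by an SEU, is located by the syndrome and corrected on read.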
The developed software saves the last insertion offset, which avoids iterating through allocated memory when searching for free space. This design optimizes the process of adding pages, directories, and tables. At the same time, the deletion operation shifts this offset to the position of the last data erasure, avoiding excessive memory usage. Moreover, in order to minimize disk accesses, which are computationally expensive in terms of performance and platform resources, a global cache-table system was integrated into the kernel. Its main purpose is storing tables, directories, and pages in random access memory (RAM), utilizing a data-eviction algorithm that factors in their relevance, lock state, and memory-cleanup requirements. When the system attempts to insert a new entry, the table algorithm first tries to write it into available free space. If this fails, it attempts to rewrite older entries based on their lock status. If insertion is still impossible, the entry is temporarily held in a dedicated memory buffer, which is freed immediately after the entry is no longer needed. # 3 DBMS Performance Analysis and Testing To assess the quality of the developed software, a set of metrics was defined. For the memory footprint metric, executable files of the same project compiled on different systems were analyzed. The results are summarized in Table 1. Table 1: Executable size on different platforms The dynamic size represents the total project size, whereas the static size reflects the executable size on embedded systems. Thus, the minimum requirement for deployment is at least 120 KB of flash memory. For comparison, Table 2 lists the sizes of lightweight executable versions of various DBMS. 
Table 2: Executable size of different DBMS on GNU Linux Ubuntu

To assess the performance of the developed DBMS, a test setup was deployed on a Raspberry Pi 3B platform equipped with a BCM2835 processor, 909 MB of RAM, and a 32 GB memory card. The test data comprised a 36-character string containing a record index, a random string variable, and a random integer variable. The results are presented in Table 3.

Table 3: DBMS average work time

For comparison, the results of analogous desktop DBMS are presented in Table 4.

Table 4: Insert speed for 5 million records

# 4 Summary

The developed modular DBMS demonstrates high performance under constrained computational resources. Owing to its multi-level architecture with parallel access support, its caching mechanism, and the integration of a bit error correction algorithm, the system ensures resilience to single-event upsets (SEUs) and minimizes disk accesses. The software solution significantly enhances the efficiency of autonomous rovers, embedded systems, and similar resource-constrained devices.

# References

[1] Farzad Tavakkoli, Azam Andalib, Asadollah Shahbahrami, and Reza Ebrahimi Atani. A Comparison of Lightweight Databases in Mobile Systems. Journal of Computing, 3(7), July 2011. ISSN: 2151-9617.
[2] Pouria Moosavi. LittleDB. Available at: https://github.com/pouriamoosavi/LittleDB. Accessed: 2025-05-13.
[3] Martin Hedenfalk. LMDB Documentation. Available at: http://www.lmdb.tech/doc/. Accessed: 2025-05-13.
[4] Oracle Corporation. Berkeley DB. Available at: https://www.oracle.com/database/technologies/related/berkeleydb.html. Accessed: 2025-05-13.
[5] Mukku Pavan Kumar and Rohit Lorenzo. A Robust Radiation Resistant SRAM Cell for Space and Military Applications. Integration, Volume 96, May 2024, Page 102155.
[6] STMicroelectronics. STM32F103C6T8 Microcontroller. Available at: https://www.st.com/en/microcontrollers-microprocessors/stm32f103c6.html. Accessed: 2025-05-13.
[7] JSC NIIET. 1874VE7T Microcontroller. Available at: https://niiet.ru/product/1874%D0%B2%D0%B57%D1%82-2/. Accessed: 2025-05-13.
[8] JSC NIIET. 1887VE6T Microcontroller. Available at: https://niiet.ru/product/1887%D0%B2%D0%B56%D1%82/. Accessed: 2025-05-13.
[9] Aleksei Sergeevich Tortika and Aleksei Sergeevich Ershov. Review and Comparative Analysis of Modern Database Management Systems. Bulletin of the Saratov State Technical University, November 25, 2020.
[10] Yuanyuan Cui, Mian Lou, Jianqing Xiao, Xunying Zhang, Senmao Shi, and Pengwei Lu. Research and Implementation of SEC-DED Hamming Code Algorithm. In 2013 IEEE International Conference, 2013.
[11] GOST 28195-89: Software Quality Evaluation. Russian National Standard (in Russian).
[12] VMware. Greenplum. Available at: https://greenplum.org. Accessed: 2025-05-13.
[13] The PostgreSQL Global Development Group. PostgreSQL. Available at: https://www.postgresql.org/. Accessed: 2025-05-13.
[14] Oracle Corporation. Oracle Database Express Edition (XE). Available at: https://www.oracle.com/uk/database/technologies/oracle-database-software-downloads.html. Accessed: 2025-05-13.
The article addresses the problem of storing data in extreme environmental conditions with limited computing resources and memory. There is a requirement to create portable, fault-tolerant, modular database management systems (DBMS) optimized for use in embedded systems. Existing databases, such as LittleDB, LMDB, and Berkeley DB, are reviewed, and their limitations are identified. A variant of a portable DBMS is introduced to efficiently manage data in environments where computational resource usage must be minimized, while meeting specific requirements for fault tolerance and noise immunity. Common solutions for optimizing the insertion, storage, and management of data are reviewed. Algorithms for fault-tolerant data encoding in RAM are implemented. An architectural solution for data storage that minimizes the impact of bit errors is proposed. Software that manages relational data in extreme conditions is developed, allowing its results to be tested and compared against existing solutions.
[ "cs.DB", "68P15", "H.2.4" ]
# 1 INTRODUCTION

Analyzing human mobility and geolocation data offers a wealth of applications that touch nearly every aspect of modern life [22, 23]. To truly understand human mobility, it is essential to uncover the underlying relationships between places and the activities they attract, as these connections shape when, where, and why people move [35]. Consider the scenario illustrated in Figure 1, which reflects a common experience: while exploring a new city, someone who loves cats might visit nearby cafés and restaurants but miss out on a unique experience, such as a local cat café, simply because they weren't aware it existed. This highlights the potential of intelligent systems capable of inferring personal interests and mobility patterns to provide relevant, personalized recommendations and help people discover places that matter to them without actively searching. This experience showcases a broader need: models that aim to understand spatial context and connect it to individual behavior and intent.

Places vs. Points: In the analysis of human mobility and geolocation data, Points of Interest (POIs) serve as the finest-level spatial units, offering detailed insights into the structure and function of the built environment [24, 26]. However, POIs such as cafés, museums, or parks do not, by themselves, capture the full meaning of a place. Foundational theories in the social sciences and geography distinguish between points, which represent abstract coordinates or labeled locations, and places, which carry social or personal significance and may not conform to predefined geographic boundaries such as POIs, cities, or states [3, 12]. A place, from a human perspective, can be fluid, shaped by personal experience, routine, or cultural meaning. For instance, someone might consider "My weekend walking route" a meaningful place, even though it spans multiple parks, streets, and cafés, none of which, individually, captures the essence of that place.
Similarly, a community might view a set of adjacent businesses and gathering spots as a single neighborhood hub, despite those POIs being labeled separately in datasets. These examples illustrate that places are often emergent, defined through human behavior and connection, rather than strictly bounded by spatial labels. Although some studies [19] focus on learning general-purpose representations of locations, sometimes treating POIs as the finest-grained spatial units, such approaches fall short of capturing the lived experiences and contextual meanings tied to these locations. Therefore, understanding mobility patterns requires moving beyond learning static features of geographic units to modeling how individuals perceive and engage with these locations as meaningful places. This vision paper takes a step in that direction by proposing a human mobility-driven spatiotemporal foundation model via understanding places through the dynamics of how people move and interact with their environments, ultimately enabling downstream applications as personalized as guiding cat lovers to places where they can find cat cafés and other places they may like.

# 2 RELATED WORKS

Research shows that understanding the features of geolocation data can be a more powerful indicator of long-term health outcomes than genetic factors [13]. This recognition has driven efforts to collect large-scale data capturing the interplay between human activity and the surrounding environment [22, 23]. To effectively harness this wealth of data, robust methods for understanding geographic entities have been developed. By analyzing how people move through space and time, we can transform raw location data into actionable insights, uncovering meaningful mobility patterns that drive a wide range of advanced applications [22].
Machine learning has been applied to a wide range of data sources in various modalities for geospatial modeling [8]: for instance, web search data is used to predict influenza trends [10], search queries are leveraged to model global economic indicators [5], and satellite imagery is utilized to estimate factors like forest cover and housing prices [27]. From a task-driven standpoint, these approaches frequently employ statistical and deep learning models, such as CNNs and RNNs, to extract fine-grained spatiotemporal patterns [8], and the majority concentrate on particular domains like internet data, satellite imagery, or map-based datasets. A range of prior approaches have also aimed to develop general-purpose geographic encoders [17, 30]. To address the limitations of task-specific approaches and the reliance on manually crafted features to encode geolocation data and human mobility, there is a growing need for foundation models that can understand geolocation data at varying levels of granularity while simultaneously capturing human mobility patterns [19]. Foundation models have already become a dominant paradigm in domains such as computer vision and natural language processing, where models like CLIP [1] and GPT-4 [1] demonstrate strong transferability across tasks and data distributions. For spatiotemporal reasoning, recent work has focused on developing task-adaptive and region-agnostic foundation models for either (i) trajectory prediction [6, 20, 32, 36] or (ii) geolocation representation learning [2, 4, 18, 29, 33, 34].

# 2.1 Foundation Models for Trajectory Prediction

Foundation models for trajectory prediction (TP) are designed to capture general sequential patterns from trajectory data [6, 19, 20, 32, 36]. For example, PMT [32] introduces a transformer-based foundation model for human mobility prediction, representing trajectories as sequences of Census Block Groups (CBGs).
Trained autoregressively, the model captures spatiotemporal patterns to support trajectory prediction tasks. Similarly, UniTraj [36] proposes an encoder-decoder pretraining approach to obtain a POI sequence encoder that can be fine-tuned for trajectory-related downstream tasks. TrajFM [20] introduces a trajectory foundation model pretrained on vehicle trajectories from multiple cities, using trajectory masking and autoregressive recovery to enable both regional and task transferability.

# 2.2 Foundation Models for Geolocation Representation Learning

Another type of spatial foundation model focuses on geolocation representation learning (GRL), aiming to generate general-purpose embeddings for geographic entities [2, 4, 18, 19, 29, 33, 34]. Some studies have leveraged large language models (LLMs) and vision-language models (VLMs) to learn location embeddings. For example, GeoVectors [29] and SpaBERT [18] utilize open-source data such as OpenStreetMap, while G2PTL [31] is trained on large-scale logistics delivery data. The most closely related work, PDFM [2], leverages a pretraining stage to integrate diverse, globally accessible geospatial data, such as maps, activity levels, and aggregated search trends, alongside environmental indicators like weather and air quality. This approach involves building a heterogeneous geospatial graph, where counties and postal codes serve as nodes and edges are defined based on spatial proximity. A graph neural network (GNN) is then used to learn meaningful embeddings for these nodes. Despite progress in geolocation foundation models, current methods still struggle to capture human mobility across multiple spatial scales and often fail to understand places, that is, locations defined by human meaning and behavior. These models typically rely on fixed units like POIs or administrative boundaries, which do not reflect how people experience space.
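As a simplified illustration of the PDFM-style graph construction described above, the sketch below connects region nodes by spatial proximity. The identifiers, coordinates, and distance threshold are invented for exposition; the real approach additionally attaches diverse node features and trains a GNN over the resulting graph:

```python
from itertools import combinations
from math import hypot

def build_proximity_graph(nodes, threshold):
    """nodes: {region_id: (x, y)}. Connect every pair of regions whose
    distance is within the threshold; returns an undirected adjacency map."""
    adj = {rid: set() for rid in nodes}
    for a, b in combinations(nodes, 2):
        (ax, ay), (bx, by) = nodes[a], nodes[b]
        if hypot(ax - bx, ay - by) <= threshold:
            adj[a].add(b)
            adj[b].add(a)
    return adj
```

In practice the nodes would be counties or postal codes with geographic coordinates, and the edges would feed a GNN that propagates features between nearby regions.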
In Section 3, we delve into these limitations, which motivate the core objectives of this vision paper. A general qualitative comparison of these existing foundation models is also summarized in Table 1.

# 3 LIMITATIONS & MOTIVATIONS

# 1. Lack of Mutual Awareness in Mobility and Location Models

A key limitation lies in the lack of explicit human mobility information integrated into geolocation data. While current foundation models [2, 4, 18, 29] synthesize rich representations of geographic entities, they fall short in capturing who visits these locations, how they get there, and when these movements occur. This omission restricts a model's ability to fully understand and represent dynamic human behavior. Without incorporating mobility patterns, such as inflow and outflow volumes, visit frequency, and temporal visit distributions, the learned representations remain static and disconnected from real-world usage. On the other hand, existing foundation models for trajectory prediction [6, 20, 32, 36] do not integrate the rich information from geolocation data into sequence training, resulting in the loss of location semantics across different levels of granularity. This gap highlights the need for models that go beyond static spatial features by integrating both geolocation-level data and mobility patterns. By combining where places are with how people move through them, we can begin to truly understand the complex, lived experience of places.

From Points to Places: Towards Human Mobility-Driven Spatiotemporal Foundation Models via Understanding Places

Table 1: General qualitative comparison of existing spatial foundation models.
“Understand Places”: the ability to reason about places; “Mobility Utilization”: the ability to utilize human mobility information; “Temporal Utilization”: the reliance on temporal information; “Capability”: the goal of the foundation model, either TP for Trajectory Prediction or GRL for Geographic Representation Learning; “Granularity”: the level(s) of granularity at which the model can infer; “Pretraining”: the pretraining framework.

# 2. Lack of Temporal Dynamics in Modeling Mobility

Another limitation of existing foundation models for understanding geolocations, e.g., PDFM [2], is their handling of temporal information. The input data sources mostly exhibit misaligned temporal granularity, which can affect the model's consistency. Moreover, PDFM generates static geolocation embeddings, failing to capture the dynamic and time-evolving nature of human mobility. Incorporating temporal alignment and explicitly modeling temporal dynamics could significantly improve its effectiveness in real-world mobility scenarios.

# 3. Scalability Issues

Another key limitation concerns the scalability of pretraining foundation models, which typically rely on massive datasets. For instance, UniTraj [36] is pretrained on 2.45 million trajectories using an encoder-decoder architecture. Similarly, a transformer-based model has been trained on nearly 42 million sequences of location-based service (LBS) data to support tasks like next-location prediction and mask imputation [32]. While these models demonstrate strong performance, such large-scale training paradigms demand significant computational resources, posing barriers to accessibility, reproducibility, and deployment in resource-constrained environments.

# 4. Limitation in Granularity Flexibility

Another limitation of current foundation models is their design for single-granularity inference.
For instance, PMT [32] represents trajectories as sequences of CBGs, and all downstream tasks, such as next location prediction, are performed at this level. Similarly, PDFM [2] learns general-purpose embeddings for U.S. ZIP codes and counties, limiting inference to those granularities and lacking the ability to operate at finer levels such as POIs or CBGs. However, a place might be interpreted through multiple geographic entities at different granularity levels. This raises a critical question: Can we develop a foundation model that integrates information across multiple spatial scales and supports inference at any desired granularity?

# 4 RESEARCH DIRECTIONS

A human mobility-driven spatial foundation model must capture rich, multi-dimensional context to support diverse applications. In this section, we highlight essential contextual signals rather than prescribing specific training methods.

# 4.1 Towards Understanding Places

To reason about human mobility in a meaningful way, it is essential to move beyond point-based representations and adopt a structured notion of places. In this context, we define a place as a semantically meaningful spatial unit that may correspond to or span multiple geographic entities, such as POIs, postcodes, neighborhoods, or administrative regions.

Definition 4.1 (Place). A place $P$ is defined as a non-empty set of spatial entities:

$$ P = \{ e_1, e_2, \ldots, e_n \}, \quad \text{where } e_i \in \mathcal{E} $$

and $\mathcal{E} = \mathcal{G} \cup \mathcal{P}$ denotes the universe of geographic entities $\mathcal{G}$ (e.g., POIs, postcodes, cities) and existing places $\mathcal{P}$. This definition allows a place to be hierarchically composed of both primitive spatial entities and other places, enabling recursive and multi-scale representations. This formalism allows flexibility in representing places across scales and contexts.
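Definition 4.1 can be mirrored by a small, purely illustrative data structure in which a place recursively groups primitive geographic entities and other places. This is a sketch under our own naming assumptions, not a proposed implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GeoEntity:
    """A primitive geographic entity (POI, postcode, city, ...)."""
    name: str
    kind: str

@dataclass(frozen=True)
class Place:
    """A place P = {e_1, ..., e_n}: a non-empty set of members drawn
    from geographic entities and previously defined places."""
    name: str
    members: frozenset  # elements are GeoEntity or Place instances

    def leaf_entities(self) -> set:
        """Recursively expand nested places down to primitive entities."""
        out = set()
        for m in self.members:
            out |= m.leaf_entities() if isinstance(m, Place) else {m}
        return out
```

For example, a "weekend walking route" place can contain two parks, and a "pet-friendly hub" can contain that route plus a cat café, matching the recursive, multi-scale composition in the definition.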
For instance, a place might be as specific as a single cat café or as broad as a neighborhood known for pet-friendly venues.

Challenges: Learning to represent places introduces several challenges. First, the data associated with many geographic entities is often sparse or incomplete. For example, detailed mobility traces or POI metadata may be missing for underrepresented regions. Second, the notion of place is inherently subjective and dynamic, evolving with user preferences, temporal context, and social factors.

# 4.2 Spatiotemporal Representations for Human Mobility Understanding

An important step toward understanding human mobility is the ability to represent the built environment in a way that reflects how people interact with it. Rather than viewing places in isolation, future research must consider how their roles, proximities, and interactions shape movement behavior. This includes modeling how certain places attract recurring visits, how regions are organized hierarchically, and how connectivity patterns vary across urban and rural landscapes, all of which can be effectively captured using heterogeneous graph representations. By capturing such spatial structures, researchers can better ground human mobility patterns in the environments that produce them, enabling models that are both generalizable and context-aware.

Challenges: While some approaches may rely solely on geographic attributes such as latitude, longitude, and spatial distance, richer representations often require incorporating semantic information like POI categories, functional roles, or contextual metadata. Determining which features to include and how to encode them remains an open research question that significantly impacts the expressiveness and utility of the resulting spatial representations.
# 4.3 Scalable and Multi-Granular Representation Learning

As geolocation data becomes increasingly fine-grained, particularly at the POI level, new challenges emerge around scalability and granularity. Once rich POI datasets are constructed by integrating data sources, training foundation models on such large-scale datasets becomes computationally intensive and time-consuming. Assuming the constructed POI dataset is represented using a graph modality, addressing the challenges of scalability and granularity will require innovation on multiple fronts. From a model-centric perspective, researchers may explore more efficient deep learning architectures that scale to massive graphs. From a data-centric view, graph condensation [11, 14, 15] techniques offer promising pathways to reduce the size of graph datasets while retaining key spatiotemporal structures, enabling efficient training without sacrificing model performance.

Challenges: Future research must address the challenge of learning flexible representations across varying spatial granularities. Depending on the context, a system may need to infer user intent at the level of a neighborhood, city, or even state. Supporting such seamless multi-scale inference remains a key open problem in geospatial AI.

# 4.4 Model Pre-training

LLMs and VLMs, as vision and language foundation models, are typically pretrained on static corpora like Wikipedia or ImageNet [7], where data remains relevant over long periods. In contrast, human mobility data is highly dynamic, shaped by infrastructure updates, policy changes, and events such as pandemics. This temporal fluidity makes static pretraining unsuitable for spatiotemporal graph data. To address this, future research should develop continual and online pretraining strategies that allow spatial foundation models to adapt to evolving mobility patterns without forgetting previous knowledge.
These approaches must detect distribution shifts, efficiently update representations, and ensure that models remain aligned with current movement trends.

Challenges: A key challenge lies in designing architectures that can effectively model the inherently multimodal nature of geolocation data. This includes integrating spatial, temporal, and semantic modalities such as coordinates, timestamps, movement patterns, transportation modes, POIs, and environmental context.

# 5 REAL-WORLD APPLICATIONS

Human mobility-driven spatial foundation models can significantly improve spatiotemporal understanding and enable diverse applications. The following examples highlight their transformative potential.

# 5.1 Improved Spatiotemporal Analysis

Foundation models that combine insights from both geolocation data and human mobility can simplify geospatial analysis and shorten the path from concept to deployment. By capturing where people go and how they spend their time, these models can support a range of use cases, from identifying ideal sites for new businesses and analyzing real estate trends to optimizing logistics and supply chains. They can also enhance socioeconomic studies and benefit sectors like hospitality, especially when tailored to specific populations such as travelers.

# 5.2 Personalized Place Discovery

A personalized geospatial recommender system [9] is a key application of human mobility-driven spatial foundation models. When visiting a new city with no knowledge of the area, users often struggle to find places aligned with their interests. A foundation model trained on mobility patterns and capable of understanding places can infer how similar users move through a city and what places they visit. This allows the system to provide tailored, context-aware suggestions, like a hidden café or a popular local spot, without requiring active search. It enhances discovery, supports tourism, and improves user experience in unfamiliar environments.
# 5.3 Urban Planning

Integrating geolocation embeddings with human mobility information allows urban planners to move beyond static maps and better understand how people actually interact with space over time. This fusion reveals which areas experience high foot traffic, how people transition between neighborhoods, and where bottlenecks or service gaps emerge [16, 21]. These models can also play a critical role in disaster preparedness and response, such as predicting evacuation patterns or modeling the impact of events like earthquakes to improve emergency infrastructure and resource allocation [25, 28]. As a result, planners can make more informed decisions about where to place amenities, how to design transit routes, and how to adapt urban spaces to the dynamic needs of residents, ultimately creating smarter, more responsive cities.
Capturing human mobility is essential for modeling how people interact with and move through physical spaces, reflecting social behavior, access to resources, and dynamic spatial patterns. To support scalable and transferable analysis across diverse geographies and contexts, there is a need for a generalizable foundation model for spatiotemporal data. While foundation models have transformed language and vision, they remain limited in handling the unique challenges posed by the spatial, temporal, and semantic complexity of mobility data. This vision paper advocates for a new class of spatial foundation models that integrate geolocation semantics with human mobility across multiple scales. Central to our vision is a shift from modeling discrete points of interest to understanding places: dynamic, context-rich regions shaped by human behavior and mobility that may comprise many points of interest. We identify key gaps in adaptability, scalability, and multi-granular reasoning, and propose research directions focused on modeling places and enabling efficient learning. Our goal is to guide the development of scalable, context-aware models for next-generation geospatial intelligence. These models unlock powerful applications ranging from personalized place discovery and logistics optimization to urban planning, ultimately enabling smarter and more responsive spatial decision-making.
[ "cs.AI" ]
# 1 Introduction

Filling out paperwork is a pervasive and tedious task. Although some paper forms have been replaced by fillable rich-text PDFs, many are only available as pure images, either in their original format or as scanned physical documents. These forms represent the most challenging setting because agents can only interact with the document as an image rather than through an information-rich DOM or PDF typeset text and vector graphics. This task builds on prior work on document understanding, OCR, localization, and agentic workflows to evaluate end-to-end image manipulation accuracy.

In this work, we propose a new benchmark for evaluating the ability of general-purpose vision-language agents (VLAs) to perform end-to-end form completion. Our evaluation focuses on realistic use cases where an agent must interpret a document and populate fields based on a user profile. Relevant user information is provided as raw text, a SQL database, or other completed forms containing partially overlapping responses. Across four tasks involving these inputs, we find that current baseline VLAs score under 3% accuracy in all but one case. GUI agents also struggle with this task, completing at most 3.9% of fields in the hardest Doc Transfer task. Among the steps involved in form-filling, we find that VLAs primarily struggle with text placement. GUI agents struggle with text placement, multi-step actions, and completion within the allotted time frame.

To address the localization bottleneck, we introduce a modular architecture that separates semantic understanding from spatial grounding. Specifically, we equip any VLA with the ability to name the field it intends to complete, e.g., "Date of Birth", and delegate the task of locating the corresponding input area to an auxiliary VLM FieldFinder tool. FieldFinder predicts the bounding box of the target field's input region (e.g., an empty line, cell, check box, or empty space next to the target text).
VLAs, when equipped with FieldFinder, improve accuracy by as much as 54 percentage points. Our contributions are as follows: (1) a benchmark for evaluating agents on realistic form completion scenarios, showing that current VLAs struggle to accurately identify field placements; and (2) an open-vocabulary field detection model, showing that it helps VLAs overcome spatial reasoning limitations. We intend to release both publicly on GitHub.

# 2 Related Work

Several benchmarks exist for evaluating document layout understanding (Zhong et al., 2019; Pfitzmann et al., 2022; Li et al., 2020, 2019; Harley et al., 2015). Numerous vision-language models (Xu et al., 2020; Li et al., 2021; Bao et al., 2020; Appalaraju et al., 2021; Lee et al., 2022) have been proposed for these types of tasks. Unlike traditional QA-style benchmarks, VLA evaluations generally measure a path-independent end state, as in Zhou et al. (2023), Zheng et al. (2022), Liu et al. (2023), Yao et al. (2024), and He et al. (2024), which often include elements of form-filling. Existing software, such as macOS Preview and Amazon Textract, can localize text fields in PDFs. However, such tools sometimes fail to identify non-underlined fields, including table cells or those indicated merely by a colon (e.g., "Name: "). In contrast, our work builds on these domains to explore end-to-end, real-world form completion.

# 3 FormGym: Realistic Form-Filling for Agents

We aim to evaluate whether VLAs can produce completely filled forms when given access to user data and image editing tools. FormGym includes a diverse set of forms, user profiles, and agent actions representing a range of realistic challenges.

# 3.1 Documents

Our benchmark consists of four document tasks.
The Auto Loans - Text task consists of four densely annotated American vehicle loan application forms containing a total of 357 input fields. To enable evaluation on multiple user profiles (see below), we annotate each field with the type of user information (e.g., full name) it should contain rather than a specific answer (e.g., John Doe). For each form, we provide four user profiles. User profiles contain atomic facts, such as first name and postal code. As a result, many fields, such as address or middle initial, do not map directly to user profile information and instead must be derived from one or more user profile facts. In the case of the Auto Loans - Doc Transfer task, we provide the facts in the form of another Auto Loans source document, densely completed with user information. Information not available in the source document is provided in natural language.

The Database Task consists of 49 fields on two commercial banking forms. We provide the content of 39 of these fields in a SQL database that agents must query. The remaining fields are not provided in the SQL database and must be calculated arithmetically from values in other fields according to instructions on the form.

Finally, we contribute the FUNSD Task for evaluating diverse formats and multilingual reasoning, derived from Jaume et al. (2019)'s document relation dataset. The FUNSD Task consists of 50 examples from the FUNSD test set, with exactly one target answer field masked in each document.

# 3.2 Actions

To edit forms, we provide agents with the following actions:

• PlaceText(x, y, value): Place the text value centered at the coordinates $(x, y)$.
• DeleteText(x, y): Delete all input text whose bounding boxes contain the coordinate $(x, y)$.
• SignOrInitial(x, y, value): Place the value at coordinate $(x, y)$ in the form of a signature or initials.
• QuerySql(query): Query the SQL database in the Database Task using query.
• Terminate(): End the current session.

# 3.3 Flows

We evaluate agents under two workflows:

One-shot: the agent must place all text at once.
Iterative: the agent may take multiple sets of actions over the course of up to 10 rounds, allowing it to correct mistakes.

We report additional details in Appendix A.2.

# 3.4 Evaluation

Each field is associated with a correctness function to provide fair evaluation of answers with multiple correct formats, such as telephone numbers. If a field contains multiple text inputs, we concatenate them. We choose field accuracy as our primary evaluation metric, ignoring fields that should be empty according to the ground truth label to avoid inflating accuracy. A text input is considered to be inside a field if the center point of the text is within a designated bounding box.

# 3.5 Baseline Agents

We experiment with both classic VLAs and GUI agents capable of interacting with browser and desktop applications.

# 3.5.1 Vision Language Models

We prompt VLAs with API documentation, examples of all available actions, and a natural language description of the user profile (Appendix A.4).

# 3.5.2 GUI Agents

We instantiate the GUI agents Claude Computer Use and OpenAI Operator with the free in-browser photo editing application Photopea, whose interface is nearly identical to Photoshop (Appendix A.3). We prompt GUI agents with natural language user profile descriptions and instructions to complete the form. For accessibility and cost reasons, we limit operators to five minutes per page. Prompts include detailed instructions on how to use the Photopea interface, without which GUI agents fail completely (Appendix A.5).

# 4 FieldFinder

We observe that large baseline VLAs make coherent API calls but universally struggle to place text in appropriate locations. To ameliorate this issue, we create the FieldFinder tool.
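The field-containment criterion from the evaluation protocol in Section 3.4 (a text input counts as correct placement only if the center of its text lies within the field's designated bounding box) can be sketched as follows; the tuple layout is our own assumption, not the benchmark's actual data format:

```python
def text_in_field(text_box, field_box):
    """Boxes are (x_min, y_min, x_max, y_max). A text input counts as
    inside a field when the center of its bounding box falls within
    the field's designated bounding box."""
    x0, y0, x1, y1 = text_box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    fx0, fy0, fx1, fy1 = field_box
    return fx0 <= cx <= fx1 and fy0 <= cy <= fy1
```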
FieldFinder takes a form image and a text description of the name of the target field as input and predicts the bounding box around the valid input space (Figure 2).

# 4.1 Dataset

To train the FieldFinder tool, we create a (document, target field name, bounding box) dataset using question/answer relations in the FUNSD and multilingual XFUND (Xu et al., 2022) form understanding datasets. Since FUNSD and XFUND forms contain responses in answer fields, we use horizontal inward content-aware fill to automatically remove text while generally preserving formatting such as lines and table boundaries.

# 4.2 Training

We fine-tune a Florence 2 Large (Xiao et al., 2024) vision foundation model to predict the answer bounding box coordinates given the target question text and document. We choose Florence 2 because its pretraining contains both open-vocabulary object detection and tasks requiring OCR, minimizing the distribution shift between pretraining and fine-tuning. Florence 2 Large has only 0.77B parameters, contributing minimal latency and memory overhead when paired with much larger VLAs. We train FieldFinder for 4 epochs using early stopping, batch size 8, and learning rate 1e-6 on one NVIDIA A100 GPU for approximately 20 hours. FieldFinder achieves an intersection-over-union of $20.9\%$ on the FUNSD test set.

Figure 2: Agent use of the FieldFinder tool. 1) The agent ingests an input form or database. 2) The agent requests the location of an empty field by name. 3) The FieldFinder returns the bounding box around the target field to the agent.

Table 2: Total form pages, fields whose values are supplied in natural language, supplied in a database, and user profiles in FormGym tasks.

# 5 Results

Overall, VLAs struggle with this task, with models performing best on FUNSD and worst on Database (Table 3). Baseline models generally score $\leq 1\%$, except for Claude on FUNSD and Database ($32\%$ and $2.7\%$, respectively).
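Two geometric checks underpin these numbers: the center-point containment rule used for field accuracy (Section 3.4) and the intersection-over-union score reported for FieldFinder (Section 4.2). A minimal sketch, assuming boxes are given as (x1, y1, x2, y2) pixel coordinates and that the helper names are our own illustration:

```python
def center_in_box(text_box, field_box):
    """Field-accuracy rule: a text input counts as inside a field if the
    center point of its bounding box lies within the field's bounding box."""
    cx = (text_box[0] + text_box[2]) / 2
    cy = (text_box[1] + text_box[3]) / 2
    return field_box[0] <= cx <= field_box[2] and field_box[1] <= cy <= field_box[3]

def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

Note that the containment rule is deliberately lenient: a prediction can score as correct even with a modest IoU, as long as its center falls inside the target field.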
When introducing FieldFinder, we observe equal or better performance in all cases. In the best case, GPT-4o's performance on FUNSD increases from $2\%$ to $56\%$. We observe smaller gains of up to 16.9 percentage points on Auto Loans (GPT-4o) and 29.3 points on Database (Claude 3.7). Certain small, open-source models, including Aria 25B and Molmo 7B, achieve significant performance improvements with FieldFinder.

GPT-4o and Claude also struggle to chain actions in the more complex Doc Transfer and Database tasks. GPT-4o performs especially poorly, suggesting the user query the database herself, then signing a page footer with "Your Name".

Table 3: Average form completion percentage (correct fields / all fields). Iterative FUNSD is omitted because FUNSD forms contain only one empty field. One-shot Database is omitted because at least two turns are necessary. Molmo is not trained for multi-image prompting.

Across all tests, GUI agents performed comparably to or better than VLAs, except in Doc Transfer. Although GUI agents still made localization errors, these were typically smaller than those of VLAs. GUI agents often did not complete the Auto Loans and Database tasks within the 5-minute timeframe, negatively impacting completion. Although Claude Computer Use was more accurate than OpenAI Operator, it performed actions about half as fast, bottlenecking completion.

# 6 Discussion

We attribute weak baseline model performance to several failure modes. The inability to localize answer fields and chain actions are the primary weaknesses in Claude and GPT-4o. Although Auto Loans contains 357 graded fields, Claude and GPT-4o make as few as 71 placement attempts in some cases, suggesting a failure in document understanding and completeness tracking. Claude and GPT-4o also struggle to recover from mistakes: although they are provided with an API to delete text, its usage is vanishingly rare.
When using FieldFinder, accuracy on FUNSD is uniformly higher than on other tasks. We attribute the performance discrepancy to several factors. First, FieldFinder was trained on FUNSD, so testing on Database and Auto Loans represents a significant distribution shift in inputs. Second, Auto Loans requires differentiating between relationally complex fields, such as "applicant first reference name" versus "co-applicant second reference name", indicated by physically distant table headers.

To model the upper limit of the impact of FieldFinder, we conduct an ablation study wherein models are prompted with the exact centroid coordinates of fields. Under these conditions, GPT-4o achieves $77\%$ accuracy and Claude 3.7 achieves $82\%$, suggesting that field localization errors account for about four-fifths of errors, while document understanding accounts for the remaining fifth. Future work should explore training field localizers on a broader distribution of documents and improving foundation models' visual reasoning and backtracking abilities.

Given GUI agents' accurate but sluggish performance, future research should prioritize inference speed and UI generalization, with a minor focus on localization. Poor inference efficiency also raises costs, which we calculate to be approximately $\$1$ USD per Auto Loans page. We note that without iterative and specific prompt engineering, GUI models perform no successful actions.
Completing paperwork is a challenging and time-consuming problem. Form filling is especially challenging in the pure-image domain, without access to OCR, typeset PDF text, or a DOM. For computer agents, it requires multiple abilities, including multi-modal understanding, information retrieval, and tool use. We present a novel form-filling benchmark consisting of 432 fields spread across 55 documents and 3 tasks, requiring knowledge of 236 features per user. We find that baseline VLAs achieve less than 1% accuracy in most cases, primarily due to poor localization ability. GUI agents also struggle, scoring between 10.6% and 68.0% despite high cost and latency. Therefore, we also contribute FieldFinder, a tool to assist LLMs in identifying where to place text on a form. With FieldFinder, all models achieve equal or better performance in all six study conditions, with a maximum increase from 2% to 56%.
Preprint: Don't throw the baby out with the bathwater: How and why deep learning for ARC

# 1 Introduction

Some criticisms of the current deep learning (DL) paradigm rightly note that current best models and methods are overfit to the popular datasets [1]. The limitations of testing on large datasets have become apparent. For example, performance on ImageNet has reached saturation, where models are progressing by learning patterns in the biases that labelers of ImageNet tended to make [2]. This is an indication of a gap in the current testing paradigm: trained models are tested on data that is similar to the data they were trained on (in-distribution data). Models are evaluated on existing skills, and we neglect benchmarking the efficiency of the learning process itself [1].

Alternatively, consider the example of a single riddle from the Abstraction and Reasoning Corpus (ARC) [1], shown in Figure 1. Any system attempting to solve the riddle must work from the three provided examples and should infer that it is necessary to transform each colored square into its corresponding pattern. Due to the simplicity of this dataset's setup, little pre-training knowledge can be leveraged to solve these riddles. Instead, the solver must "figure out" the transformation based on the few examples provided. Crucially, the model must learn a new transformation, which limits the degree to which the model can rely on pretrained knowledge or zero-shot performance. This puts a heavy emphasis on the need for contextual reasoning (reaching the correct associations) during the evaluation. Therefore, this dataset becomes a reliable test of the efficiency of the learning process itself.

Figure 1: An example of a single easy ARC task (one datapoint). This task is solved by surrounding the red pixels with four yellow corners, blue pixels with four orange side pixels, whereas cyan and magenta input pixels remain unchanged.
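As a concrete illustration, the rule described in the Figure 1 caption can be written as a small grid transformation. The sketch below uses the standard ARC integer color codes (2 = red, 4 = yellow, 1 = blue, 7 = orange); the `apply_rule` helper is our own illustration, not part of the ARC dataset:

```python
import numpy as np

RED, YELLOW, BLUE, ORANGE = 2, 4, 1, 7  # standard ARC color codes

def apply_rule(grid):
    """Surround red pixels with four yellow corners and blue pixels with
    four orange side pixels; all other colors pass through unchanged."""
    out = np.array(grid)
    h, w = out.shape
    for r in range(h):
        for c in range(w):
            if grid[r][c] == RED:
                offsets = ((-1, -1), (-1, 1), (1, -1), (1, 1))  # diagonals
                fill = YELLOW
            elif grid[r][c] == BLUE:
                offsets = ((-1, 0), (1, 0), (0, -1), (0, 1))    # sides
                fill = ORANGE
            else:
                continue
            for dr, dc in offsets:
                if 0 <= r + dr < h and 0 <= c + dc < w:
                    out[r + dr, c + dc] = fill
    return out
```

The point of the example is that the rule is trivial to state in code once known, yet a solver must infer it purely from the three demonstration grid-pairs.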
Large-scale foundation models (both vision-based models and LLMs) do not perform well out of the box when prompted with these problems [3, 4]. Moreover, even though neural nets generally achieve state-of-the-art results in natural language processing and difficult visual classification and detection tasks [5, 6], these are perceptual/qualitative-type problems that require highly contextual and dynamic reasoning. There is significant uncertainty in the research community about whether neural nets can ever be trained or enhanced to perform well on ARC or similar datasets [1, 3].

Our perspective is that these ARC problems are actually more "perceptual" and qualitative than quantitative in nature. There are no performant quantitative approaches to searching the space of possible input-to-output transformations, since there are near-infinite possible transformations even with only a few basic priors [1]. Learning performant abstractions from data is what the deep learning paradigm is known for, specifically when applied to difficult perceptual or qualitative problems. Deep learning solutions produce state-of-the-art results on perceptual problems, for example in NLP and vision.

The deep learning paradigm consists of an untrained neural network combined with an optimizer (e.g., AdamW, SGD). When these two components are enabled with sufficient amounts of data and compute, highly skilled (accurate) artifacts are produced: artifacts that possess the right abstractions for the task at hand. The artifacts we refer to are the trained neural networks. This points us to the idea that this paradigm, namely both the untrained NN and the optimizer algorithm (as opposed to just a well-trained NN), can be what creates the novel abstractions needed for correct predictions on the ARC private test set. Indeed, we are the first to find success on ARC by implementing this combination of optimizer and NN in the evaluation loop.
We are able to explore what kind of training data, architecture decisions, model size, and other factors impact test-time tuning and the model's ability to create abstractions on the fly to solve novel ARC tasks in the forward pass. We contribute the following:

• We motivate and present architecture and pre-training recipe decisions for a performant ARC neural network in subsection 3.1.
• We propose a methodology for creating training data for Test-Time Fine-Tuning (TTFT) in subsection 3.2.
• We motivate and propose Augment Inference Reverse-augmentation and Vote (AIRV) and TTFT as test-time methods for improving ARC performance (section 3), showing a 2.6-fold increase and a further 3-fold increase over baseline ARC pre-training in ARC private set accuracy, respectively.
• This method helped achieve first place in the 2023 ARCathon competition and achieved the highest score on the ARC private test set during the 2024 ARC Kaggle competition. Unlike previous work, we achieve this on the completely novel ARC problems in the ARC private test set. We achieve the best score in the time- and compute-restricted Kaggle test environment [7].

# 2 Problem and desired properties of a solver

# 2.1 Dataset description

The Abstraction and Reasoning Corpus (ARC) dataset $\mathcal{D}$ consists of a collection of tasks (also called riddles in this paper) $\{T_i\}_{i=1}^{N}$, where each task $T_i$ is defined as follows:

• A set of training examples $\{(x_j^{(i)}, y_j^{(i)})\}_{j=1}^{n_i}$, where each $x_j^{(i)}$ and $y_j^{(i)}$ are input and output grids for task $T_i$. The input and output grids together are referred to as a grid-pair or example. Both terms are used interchangeably.
• A set of test inputs $\{x_k^{(i)}\}_{k=1}^{m_i}$, with corresponding outputs $\{y_k^{(i)}\}_{k=1}^{m_i}$ to be predicted.

Each grid is a 2D array $x \in C^{h \times w}$ of variable height $h$ and width $w$, where $C$ is a set of 10 colors. The number of training examples $n_i$ and test examples $m_i$ are variable across tasks and usually range from 2 to 6. The objective is to infer a task-specific function $f_i$ such that $f_i(x_j^{(i)}) = y_j^{(i)}$ for all training examples, and then apply $f_i$ to the test inputs to obtain the test outputs.

# 2.2 What does a solver need to excel on ARC?

Each riddle is akin to a small dataset of input-to-output examples. A solver must use associative learning as part of the process of solving the riddle. Solvers must develop associations between individual input and output grids and across the different input and output grid-pairs, then apply the relevant learned associations to the given test input grid.

In contrast to this, both vision-based meta-learning and natural-language-based reasoning datasets have a memorization problem. High-performing methods on those datasets were found to learn generic features that allow very high levels of accuracy without significant meta-learning. These methods were found to rely more on pretraining knowledge than on meta-learning to gain new skills on the new tasks provided by these datasets [8, 9]. This is possible in those datasets because the tasks share a lot of common structure, for example the Mini-ImageNet dataset [10], where the subtasks are all image-classification-based and good general object representations can be learned and reused. This zero-shot feature reuse was shown to take place with the MAML algorithm on Mini-ImageNet [8]. These zero-shot models can outperform meta-learning models on these tasks, holding state-of-the-art results [11, 9].
For a good ARC solver, the opposite of this is desirable. Encoding supposedly good features can lead to incorrect assumptions and missing relevant details. Instead, it is desirable to encode a more sophisticated learning process that can reason about the new examples and the possible transformations. In-context learning (ICL) is an initial candidate here [12, 13]. The ARC dataset is a challenging test of whether a system can perform this truer form of learning from a few examples.

A certain amount of flexibility and precision is necessary within the inner workings of a solver to satisfy the above requirements and perform with high accuracy. The solver needs to identify how relevant points in the input get transformed, including all the rules involved. The solver then needs to be dynamic enough to not only develop representations, but also access those dynamically created representations in order to correctly apply them in the new context (the test input pair).

# 2.3 The associative learning ability in the forward pass of a solver

Not all few-shot learning or meta-learning algorithms are well-suited for ARC. Some architectures, such as zero-shot learners with shallow mixing, primarily rely on generating independent deep embeddings for individual grids. These embeddings are then combined in a relatively simple manner to produce an output grid or a classification result; one example of this is [14]. Another example of such an ill-suited architecture is Proto-Net [15]. In a straightforward application of Proto-Net to ARC, each input-output grid-pair is embedded separately into a vector, and these vectors are averaged to create a "prototypical" representation.
While this approach allows for simple projection-based inference, it lacks the ability to capture essential interactions across grid-pairs. To effectively reason about transformations across multiple examples, a model needs to process all grid-pairs in unison rather than simply averaging individual representations. Without this capability, the model may overlook crucial shared patterns and transformations necessary for solving the task. The limitations of such shallow-interaction architectures become particularly evident in tasks where multiple examples must be considered together to infer a general rule. In the Proto-Net example model, the only cross-grid-pair interaction is the averaging of the embeddings. This averaging is not complex enough and may even destroy information inadvertently, due to the diversity of possible riddle objectives.

Consider a riddle that mirrors red objects horizontally and blue objects vertically. If the test input contains both red and blue objects, a solver must see and recognize these transformations jointly. Shallow architectures like CodeIt [16] struggle with such tasks because they lack mechanisms for simultaneous reasoning across all grid-pairs. However, incorporating a structured form of associative learning in the forward pass enables a solver to process grid-pairs holistically. This is further discussed in Section 3.1, where we explore how structured cross-grid interactions can significantly enhance generalization and performance. Another example of this, applied step-by-step to an ARC riddle, is worked through in more detail in subsection 3.2.

# 3 Solution

# 3.1 Solution Part 1: emphasizing In-Context Learning (ICL)

# 3.1.1 Associative learning in LLMs' forward pass

Recent work has uncovered growing evidence that large language models (LLMs) engage in a form of associative learning [13, 17].
For instance, there is now substantial support for the idea that LLMs can identify and utilize Probabilistic Context-Free Grammars (PCFGs), effectively operating as versatile pattern-recognition tools [4]. In natural language, contextual nuances play a crucial role: each word's meaning can shift dramatically based on its placement within a sentence, so extensive in-context learning appears necessary for these models to generate tokens that are both coherent and context-appropriate. Collectively, these observations indicate that LLMs can establish and exploit relationships among input tokens, an essential ingredient for achieving strong performance on ARC tasks.

[18] find that models like BART and T5 can represent and track changing entity states over the course of a narrative (e.g., whether an object is empty or who possesses it), even without explicit supervision. Their analysis also shows that these representations tend to be localized in specific tokens and can be directly manipulated, causing the model to update its subsequent text generation accordingly. Crucially, most of this dynamic-tracking ability comes from extensive open-domain pre-training rather than fine-tuning, indicating that LLMs may possess the requisite capacity motivated in subsection 2.3 for solving analogous perceptual reasoning tasks.

Model choice We base our approach on the LongT5 encoder-decoder model [19], leveraging its extended context length to accommodate larger riddles. The T5 family was selected for its sequence-to-sequence capabilities, having been trained on a transformation task from non-causal to causal text [20]. This pre-training instills non-causal attention mechanisms within the encoder, making it well-suited for associative learning.

solve: train input1 2999 4299 4442 2922 output1 19 4 4 294 2999 4429 4492 2922. input2 27757 27525 22277 57757 52257 output2 29 5 5 275 27757 27275 27257 55727 52257.
test tinput1 4884448 8844844 4844488 4848848 8888484 8448844 8848884 toutput1 55 7 7 48 4884448 8488884 4484448 4888448 8848484 8484844 8848884.

Figure 2: An example of the text prompt fed to our model. Here, each input and output grid is unrolled into a flat sequence of pixel-color values, which are then concatenated with keywords such as train, test, input, and output. The phrase solve: indicates that the model should produce the correct transformed grid (toutput1) corresponding to the given test grid (tinput1).

To fine-tune the model, we encode each riddle as a single text sequence, where grids are unrolled row-wise, with pixel colors represented numerically and rows separated by spaces. By presenting the complete riddle as a unified input, the model processes all grid-pairs simultaneously, allowing tokens to influence each other, as shown in Figure 2. This aligns with the recommendations in Section 2.3.
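The row-wise serialization described in this subsection can be sketched as follows. This is a simplified illustration of ours: the helper names are hypothetical, and the exact token layout in the Figure 2 example includes additional tokens that we omit here.

```python
def grid_to_text(grid):
    # Unroll a grid row-wise: each row becomes a string of digit colors,
    # and rows are separated by spaces.
    return " ".join("".join(str(c) for c in row) for row in grid)

def encode_riddle(train_pairs, test_inputs):
    # Concatenate all grid-pairs into one sequence so the encoder's
    # non-causal attention sees the complete riddle at once.
    parts = ["solve: train"]
    for i, (x, y) in enumerate(train_pairs, start=1):
        parts.append(f"input{i} {grid_to_text(x)} output{i} {grid_to_text(y)}.")
    parts.append("test")
    for k, x in enumerate(test_inputs, start=1):
        parts.append(f"tinput{k} {grid_to_text(x)}")
    return " ".join(parts)
```

For example, a riddle with one demonstration pair and one test input becomes a single flat string in which every demonstration token can attend to every other token.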
Direct output This direct output approach stands in contrast to earlier methods that generate code as an intermediate step, which can then be executed to produce the solution grid [16, 22]. On the one hand, such code-based methods have the advantage of verifiability: running the program directly tests its correctness. On the other hand, producing code to solve a riddle is typically more difficult than simply generating the final grid [21]: specifying a concept or process in full detail often proves more demanding than acting out the game or task itself. A code-based solution must be thoroughly defined, syntactically valid, and capable of handling multiple potential input grids. By contrast, a human solver typically finds it more straightforward and faster to solve the riddle outright than to write and debug a general-purpose program that performs the same task.

Generating code as the final output does not fundamentally alter the broad dynamic search process through which LLMs solve riddles; this internal flexibility and reasoning remain essential. However, it does shift the abstraction space, training the model to handle perception and action via programming constructs instead of direct grid outputs. A notable benefit of code generation lies in its strong validation: one can run the generated program on the provided examples to confirm correctness. Yet, our experiments found that the added complexity of producing a syntactically correct, general-purpose solution introduced extra challenges and did not match the performance of direct output generation in initial trials.

# 3.1.2 Multi-task Training

Multi-task training compels the model to manage multiple modes and contexts simultaneously, thereby reducing its reliance on memorizing individual task details and nudging it toward genuine contextual reasoning [23]. In our setup, we integrate additional tasks requiring high levels of contextualization and reasoning, drawn from various NLP datasets, alongside the ARC data.
This approach boosts ARC performance, mirroring findings by [23], who demonstrated that vanilla transformers can exhibit robust learning-to-learn capabilities when trained on a sufficiently large and varied set of tasks. Although [23] employed simpler permuted vision tasks rather than abstraction- or reasoning-based datasets, their conclusion that scaling task diversity helps escape the "memorization regime" remains consistent with our observations.

# 3.1.3 Code pre-training and contextualization

We observe that training on coding tasks offers a more pronounced performance boost on ARC than merely adding multi-task training derived from language or NLP domains. It is generally easier to continue a sentence from the middle of a paragraph than to continue a code file from its midpoint. Code datasets inherently demand meticulous attention to detail and context, requiring the model to keep track of variables and resolve dependencies. This greater focus on accuracy and hierarchical reasoning means that memorization alone is insufficient, an important distinction from many NLP tasks where world knowledge and memorized associations play a larger role. As noted by [24], coding data "is more logical and less ambiguous", ultimately fostering a better focus on context. A more complete discussion of recent literature supporting the use of code data in LLM training can be found in section 5.

# 3.1.4 Automatic Riddle Generators

Programmatic riddle generation is a valuable strategy for expanding ARC training data and enhancing model learning. To facilitate this, we employ Domain-Specific Language (DSL) techniques, drawing inspiration from the work of Andreas Koepf [25] and Johan Sokrates Wind's (Icecuber) DSL [26], to construct synthetic riddles by sampling function names and their parameters. A key aspect of our training approach involves training the model to infer these underlying DSL function names and parameters from the input riddle grids, in addition to predicting the final output grids.
This dual-prediction strategy, where models learn to predict both the output grid and the DSL function names, contributes to more robust performance compared to training on either target alone. While DSL-generated data represents a portion of our overall training corpus, it serves to illustrate important data generation concepts we utilize. To further diversify our ARC-specific training data, we also employ various more traditional riddle generators. These generators produce complete input-output pairs and test grids, enriching our training dataset. We observe that if riddle examples leave aspects of the transformation underspecified, the model may inadvertently learn to encode these ambiguities directly into its weights, rather than inferring them contextually. This can lead to undesirable biases and reliance on memorization. To mitigate this, we deliberately err on the side of overspecification in our generated riddles, ensuring sufficient information for the model to unambiguously determine the intended solution. We give a more detailed description of the synthetic riddle generators in Appendix A.

# 3.2 Solution Part 2: Optimizing in the Evaluation Loop (Test-Time Fine-Tuning)

During evaluation, we leverage each test riddle's demonstration examples to create synthetic training data. Specifically, we select a grid-pair from the riddle's provided examples and repurpose it as a new "test example", forming a new, smaller riddle. We have the answer to this new riddle, as it was taken from the demonstration examples, and so we can use it to train the model at test time. To obtain much more data and ensure each new riddle differs from the original, we apply several augmentations:

• Color permutation: Randomly shuffle the color labels throughout the riddle.
• Spatial transformations: Rotate, flip, or transpose the input and output grids, sampling from the dihedral group $D_4$.
• Shuffling: Randomly reorder the input demonstration examples.

We then perform a brief round of fine-tuning on these augmented riddles before generating predictions for the test grids. This procedure can be seen as a form of test-time training [27] and is referred to here as Test-Time Fine-Tuning (TTFT).

# 3.2.1 Motivation for TTFT Through Iterative Reframing

The central idea behind Test-Time Fine-Tuning (TTFT) is that the solver may initially make mistakes on the private test set, and we can exploit that feedback to refine its approach. Much like a human solver, the model can re-evaluate and iterate on potential solutions. For instance, consider the scenario illustrated in Figure 3, where the correct transformation is to select the color of the line that does not intersect others.

Figure 3: An example of a simpler ARC riddle.

A solver or human might instead begin by hypothesizing that the intended solution is to choose the color of the thinnest line, perhaps because the first few examples happen to support that interpretation. This kind of oversight often arises from limited attention or inadequate depth of processing: the model (or a human) fails to fully observe all relevant information and thereby locks into a flawed "framing." Once the solver is committed to an incorrect framing (for example, consistently searching for the thinnest line), it will continue to discard potentially crucial data and remain blind to the correct pattern. The input grids have already been processed with the incorrect framing/bias. In such a case, only a complete reprocessing of the grids, where the correct framing is established from the outset, can guide the solver to correctly infer the solution. Proper framing is consistently shown to be critical for perceptual models, greatly influencing performance.
One illustration of this appears in [28], which observes that language models often ignore information in the middle of a prompt, yet perform significantly better when the framing (in the form of the question or key instruction) appears at both the beginning and end of the prompt. Another relevant example is instruction tuning in LLMs [29, 30], wherein models are trained to adopt a "helpful" framing instead of a purely next-word-prediction mode. Hence, mechanisms for "reframing" are crucial in solving perceptually rich tasks, especially in the ARC setting, where each riddle's solution demands a tailored framing. TTFT provides a means to adapt these frames based on feedback derived from newly generated training data, mirroring the human process of iteratively revisiting and adjusting hypotheses until the training examples are correctly solved, before finally addressing the test grids.

Why take full parameter update steps (full fine-tuning) Although several lighter-weight alternatives exist for adapting models to downstream tasks, including chain-of-thought prompting [31], few-shot prompting [32], and low-rank adaptation methods [33], we chose full parameter updates primarily for simplicity and reliability. Training the model with full parameter updates is naturally powerful enough to generate the needed abstractions. Given ARC's demand for generating diverse and genuinely new abstractions at test time, we opted for this straightforward, guaranteed method of updating, even though other adaptation techniques may also offer promising results and sufficient updating power.

# 3.2.2 Attention and masking

We specifically choose encoder-decoder architectures because they incorporate non-causal (unmasked) attention within the encoder, allowing each token to simultaneously attend to the entire input sequence.
This capability is critical for enabling the model to fully interpret and contextualize ARC riddles from the outset. By contrast, if the riddle were presented using causal (masked) attention, tokens appearing earlier in the sequence would not have access to the complete context, preventing them from forming accurate early-stage representations or hypotheses simply due to a lack of available information. Tokens representing input grids would be unable to attend forward to their corresponding output grids, significantly limiting the model's reasoning about intended transformations. To validate the practical importance of this non-causal attention mechanism, we experimentally compared our encoder-decoder approach against similarly sized causal decoder-only models and found that the encoder-decoder structure yielded substantially better performance. We were not able to run experiments to disambiguate whether this is due to the non-causal attention masking or the encoder-decoder architecture itself, but it is likely that the non-causal attention is the main factor.

# 3.2.3 Specialization to the riddle

Test-time fine-tuning can also enhance the precision required to produce completely accurate outputs. Even when the model correctly identifies the transformation function, minor execution errors, such as inaccuracies of a pixel or two, may occur. These errors are likely due to limited model depth or capacity, restricting the model's ability to execute transformations perfectly on the first attempt. TTFT can mitigate these issues by adapting the model specifically to the current riddle, refining its "execution" capabilities and enabling it to achieve precise, pixel-perfect outputs.

# 3.2.4 Beam Search for Solution Space decoding

Because our model generates output grids autoregressively, a purely greedy decoding strategy is brittle: even a single incorrect token leads to an unrecoverable trajectory.
Beam search [34] addresses this by maintaining multiple candidate solutions simultaneously, pruning all but the most promising branches at each decoding step according to cumulative probabilities. This strategy allows the solver to handle cases where the correct next token initially has a lower probability but becomes clearer in subsequent steps. Conversely, incorrect trajectories naturally lose confidence as they proceed; a well-calibrated model will assign increasingly uniform probabilities across candidate tokens when uncertain, causing these erroneous paths to be rapidly discarded. Although models can occasionally become confidently wrong, beam search generally remains beneficial [34], and the ARC dataset in particular reaps substantial advantages from this capability: each riddle has precisely one correct solution, amplifying the divergence between correct and incorrect paths under beam search.

# 3.3 Augment, Inference, Reverse augmentation and Vote (AIRV)

We propose a test-time augmentation strategy called Augment, Inference, Reverse-Augmentation, and Vote (AIRV). The procedure begins by applying a spatial transformation to the input riddle (e.g., rotation or flipping). We then run inference on the transformed riddle to obtain a predicted output grid, which is subsequently reversed back to the original orientation. Finally, we gather multiple such predictions from different spatial augmentations and use a voting scheme to select the most frequent (or most confident) output grid. Unlike beam search [34] or temperature sampling [35], AIRV can generate duplicate predictions (after reversing the augmentation). This enables a voting mechanism that amplifies strong, consistent solutions and filters out noisy variants, which is particularly useful in the ARC setting, where each riddle has only one correct answer.
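A minimal sketch of the AIRV loop (assuming a `model` callable that maps an input grid to a predicted output grid; the real pipeline augments the full riddle and serializes grids for the LongT5 model, which we elide here):

```python
import numpy as np
from collections import Counter

# The eight invertible spatial augmentations: the four rotations, plus the four
# rotations of the left-right flip. Each entry is a (forward, inverse) pair.
AUGS = [(lambda g, k=k: np.rot90(g, k),
         lambda g, k=k: np.rot90(g, -k)) for k in range(4)]
AUGS += [(lambda g, k=k: np.rot90(np.fliplr(g), k),
          lambda g, k=k: np.fliplr(np.rot90(g, -k))) for k in range(4)]

def airv(model, grid):
    """Augment, run inference, reverse the augmentation, then vote."""
    votes, grids = Counter(), {}
    for fwd, inv in AUGS:
        pred = inv(model(fwd(grid)))        # predict in the augmented frame, then undo
        key = (pred.shape, pred.tobytes())  # hashable key so identical grids pool votes
        votes[key] += 1
        grids[key] = pred
    return grids[votes.most_common(1)[0][0]]

# Stand-in "model": the identity transformation is equivariant to all eight
# augmentations, so every reversed prediction agrees and wins the vote.
out = airv(lambda g: g, np.array([[1, 2], [3, 4]]))
```

The `k=k` default arguments pin down the rotation count per lambda (avoiding Python's late-binding closure pitfall), and the byte-string key lets structurally identical grids accumulate votes.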
Assuming a reasonably competent model, voting provides an effective way to surface the dominant, consistent grid predictions (the dominant solution ideas) from noisier predictions. In our opinion, this works because there are many ways for a solution to be incorrect, while there is exactly one correct solution. This can be seen as analogous to the clustering step in the AlphaCode methodology [36], where the generated programs are clustered and the most common program cluster is selected as a proxy for the most likely correct program.

Figure 4: AIRV process applied to a simple ARC riddle. Starting from the original riddle (blue panel), the pipeline (1) augments the grids via rotations and flips, (2) runs inference on each transformed instance, (3) reverses every prediction back to the original frame of reference, and finally (4) votes on the most consistent output.

# 4 Results

# 4.1 ARC dataset splits

The ARC dataset consists of 400 training riddles, 400 public evaluation riddles, and 100 private evaluation riddles that are not accessible to the public [37]. The training riddles are the easiest, the public evaluation riddles are harder, and the private test set has been shown to be harder still [38].

# 4.1.1 Testing setup

We report our results on the private test set, with the test-time compute limitations imposed by the available competition compute environments [7, 37]: namely, 2 hours of runtime on a single P100 GPU (16 GB VRAM). Accuracy is measured by counting only exact matches between the whole predicted grid and the ground-truth output grid, allowing two grid attempts per task (top-2).
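This all-or-nothing scoring rule can be stated precisely as a short function (a sketch; the function name is ours):

```python
import numpy as np

def task_solved(attempts, truth):
    """Top-2 exact-match scoring: a task counts as solved only if one of (at
    most) two attempted grids equals the ground-truth grid exactly."""
    return any(a.shape == truth.shape and (a == truth).all() for a in attempts[:2])

truth = np.array([[0, 1], [1, 0]])
near_miss = np.array([[0, 1], [1, 1]])  # off by a single pixel: no partial credit
```

Note that a shape mismatch or a single wrong pixel scores zero, and only the first two attempts count.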
# 4.2 Analysis and Discussion

To evaluate the effectiveness of our methodology, we compare the following configurations:

• Zero Shot (No TTFT/AIRV): Direct prediction using the ARC-trained LongT5 model.
• AIRV Only: Applying the AIRV technique (with beam search decoding) but without TTFT.
• TTFT + AIRV: Combined use of TTFT and AIRV (with beam search decoding).

We also train Small and Large LongT5 variants on our training data; due to pre-training compute constraints, we train those models on only around 10% of the total training data.

Figure 5: Results for the fully trained LongT5 Base model in the three different test-time configurations.

Figure 7: Performance on the ARC’s private test set. Each subfigure analyzes the impact of the different configurations of the test-time techniques introduced. The Small and Large models trained on a subset of the training data are compared to the Base model trained on the full training data. The effect of increasing model size is contrasted with the effect of increasing the number of training examples across the different test-time techniques. We see that model size has a significant impact despite a significantly reduced training set, except when TTFT is applied at test time.

AIRV: Augmentation and voting also enable higher performance. Absolute AIRV gains show positive scaling with model size. AIRV alone can gain up to 260% (in the fully trained Base model scenario).

TTFT: Test-time fine-tuning significantly increases the model’s score; results are consistent across model sizes and training regimes. Performing TTFT before running inference with AIRV leads to an additional 300% gain in performance.

An early version of this approach achieved first place in the 2023 ARCathon [40].
A version of this approach with more optimizations and extended ARC training achieved the highest score on the ARC-AGI private test set in 2024 (58%) [39].

Experiments on model size

The Large model gets a lower score with TTFT but a higher score in the Zero Shot and AIRV-only settings, even with the lighter training. This trend holds across the Small, Base, and Large models, where Zero Shot and AIRV-only performance scales with model size. This scaling behavior is typical in deep learning [41, 42], but in this testing regime (ARC) it can be explained by the fact that the larger the model, the bigger and more expressive its forward pass will be (more and wider layers mean more associations can be made in the forward pass). This possibly accounts for the increased performance in the AIRV and Zero Shot settings, aligning well with the motivation in subsection 3.1. The benefits of increased pre-training on the ARC dataset did not beat the benefits of simply increasing the model size. This perhaps indicates that the forward-pass flexibility of the models is impacted much more by model size than by pre-training, aligning with general theory regarding model scaling laws and how they impact typical reasoning benchmarks [42, 12].

The effect of pre-training on TTFT

Interestingly, while increased pre-training does not beat the effect of simply increasing model size when it comes to the Zero Shot and AIRV-only performance of these models, it does significantly improve the score in the TTFT + AIRV setting. This effect is unlikely to be due only to larger models taking more time to train (and TTFT), as the Base model still sees a much larger boost in performance from TTFT (300%) than both the Large and Small models (140% and 240%, respectively). We discuss why this may be in section 4.2.
Contextualization at Test Time vs. Pre-Training on ARC Riddles

We have motivated why high-quality contextualization is a crucial element for tackling the ARC dataset. Yet our experimentation shows that substantial pre-training on ARC riddles remains indispensable for achieving state-of-the-art performance with TTFT, culminating in our 2024 highest score on ARC-AGI. While the space of possible ARC transformations is vast, pre-training does not merely “leak” memorized solutions. Instead, it imparts both the foundational “core knowledge priors” described by [1] and a range of more subtle but highly important priors. By this, we refer to heuristics such as a preference for simpler, human-preferred transformations (the “simple/simpler transformation” bias), a drive to validate transformation hypotheses across examples (the “looking for confirmation” bias), and even the basic notion that each example is formed by a paired input and output grid. Without these biases deeply embedded in the model weights, the solver would be far less efficient in forming or testing hypotheses during inference, and much of the forward pass would be spent merely realizing these basic things about the problem setup. This is further supported by recent findings: [43, 44] have demonstrated that predictive features emerge at different layers during pre-training, with simpler but equally predictive features appearing earlier in the network. Recently, [45] also showed that longer pre-training allows complex but predictive representations to “sediment” (move into the earlier layers). We hypothesize that extensive ARC pre-training leaves more “room” for test-time features to emerge during test-time fine-tuning, because the base ARC priors have sufficiently sedimented into the very early layers. This sedimentation process may be the crucial key that enables the model to handle more complex, task-specific reasoning when test-time fine-tuned on unseen riddles.
Contrasting pre-training with program synthesis

These priors are particularly relevant when considering a solver based on program synthesis. Program-synthesis-based approaches explicitly encode pair association and transformation-confirmation heuristics by searching for a program that correctly transforms input grids into their corresponding outputs. While these methods avoid the need for extensive domain-specific pre-training, they still require significant human intervention to guide the program synthesis algorithm. Specifically, humans must direct the algorithm to search for transformations that align input grids with output grids and ensure that the transformations are correct using the other grid pairs. Moreover, program synthesis solvers, with their manually encoded heuristics, are generally less effective when faced with perceptual problems that involve an almost limitless range of possible transformations and framings. These are important considerations when weighing the trade-offs between extensive pre-training compute (and why it is necessary) and the explicit program synthesis approach.

# 5 Related work

Classically, meta-learning can be regarded as a strategy to automate model design or parameter selection across numerous tasks, often formulated as a two-level optimization problem. In such setups, an “outer” model accumulates meta-knowledge, while an “inner” model adapts rapidly to each new task. More recent advances in in-context learning (ICL) have sidestepped explicit inner–outer distinctions, instead relying on the model’s forward pass to perform meta-learning. This behavior is commonly observed in transformer architectures trained on data with specific distributional properties [46], enabling impressive performance on various tasks.
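The two-level setup just described can be sketched with a toy first-order meta-learner (a Reptile-style update on synthetic linear-regression tasks; this illustrates classical inner/outer meta-learning only, not the forward-pass ICL mechanism):

```python
import numpy as np

def inner_adapt(theta, task, steps=5, lr=0.1):
    # "Inner" model: quickly adapt a copy of the shared parameters to one task
    # (here, linear regression trained by gradient descent on mean squared error).
    x, y = task
    w = theta.copy()
    for _ in range(steps):
        w -= lr * 2 * x.T @ (x @ w - y) / len(y)
    return w

def outer_step(theta, tasks, meta_lr=0.05):
    # "Outer" model: nudge the shared initialization toward parameters that
    # adapted well on each task (a first-order meta-update).
    for task in tasks:
        theta = theta + meta_lr * (inner_adapt(theta, task) - theta)
    return theta

rng = np.random.default_rng(0)
tasks = []
for _ in range(20):
    w_true = rng.normal(size=3)
    x = rng.normal(size=(16, 3))
    tasks.append((x, x @ w_true))
theta = outer_step(np.zeros(3), tasks)
```

The outer loop never sees task gradients directly; it only accumulates meta-knowledge about which initialization makes the inner adaptation fast, which is the two-level structure ICL collapses into a single forward pass.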
ICL appears to support a more data-efficient form of meta-learning, one capable of storing and leveraging priors in a flexible way—especially appealing for applications like ARC. A notable illustration of forward-pass ICL exhibiting strong generalization is the MLC architecture [47]. MLC mines its dataset for examples similar to the new task and, similar to our methodology, combines all relevant examples into the model’s forward pass at once, allowing it to function as a meta-learner within the forward pass. This design substantially improves performance and generalization, highlighting the potential of ICL-based approaches for tasks that require complex reasoning.

# 5.1 Concurrent Work Building on TTFT

An interesting replication based on our proposed Test-Time Fine-Tuning (TTFT) approach can be found in the work of [48], who explore the idea of adapting models on-the-fly for ARC tasks. Their method explicitly incorporates a similar short fine-tuning phase during inference on ARC, closely mirroring our TTFT paradigm. This aligns with our findings that dynamic adjustments at evaluation time can significantly enhance performance, especially when encountering tasks requiring newly discovered transformations or abstractions. Their methodology, which also builds on [49, 50], and their results strongly corroborate ours. Building further upon our proposed TTFT approach, [51] recently explored the interplay between inductive and transductive reasoning specifically within the ARC domain. Their study trains neural networks on synthetic datasets generated from Python-based implementations of ARC transformations based on [49], and they also use TTFT for their transductive models. Their experiments highlight that inductive program synthesis excels in precise symbolic tasks, whereas transduction demonstrates strength in more perceptually oriented problems.
By effectively ensembling these complementary models, their approach achieves strong results, strongly validating the effectiveness and flexibility of TTFT-based adaptation.

# 5.2 Code data in LLM training

Emphasizing coding and code training in LLMs is not new. Coding datasets form a significant part of LLM pre-training corpora, even in non-coding models [52, 53]. Code training is correlated with improved performance on reasoning tasks: [54] find that code-based models consistently outperform text-based models in reasoning, even on synthetic reasoning tasks formulated as natural text. Further, [24] show that pre-training LLMs on a mix of text and code increases the general reasoning capability of the model, and that code at the instruction-tuning stage enhances task-specific reasoning. [55] also show that code models, even when outputting text, outperform text models on few-shot structured reasoning evaluations. More recently, [56] carefully ablate the effects of code data in pre-training and find positive effects on compositional tasks like semantic parsing and mathematics.

# 5.3 ARC dataset’s related work

The Abstraction and Reasoning Corpus (ARC) dataset [1] presents a significant challenge for artificial intelligence systems due to its emphasis on reasoning from minimal examples. Numerous approaches have been proposed to tackle ARC tasks, ranging from leveraging LLMs to developing specialized neural architectures and neuro-symbolic methods to brute-force search-based methods.

# 5.3.1 Evaluating LLMs on ARC Tasks

Several studies have explored the capabilities of LLMs on ARC tasks without additional training. Mirchandani et al. [57] investigated whether LLMs can act as general pattern machines by providing the entire ARC task as context to GPT-4 and GPT-3.5. They achieved an accuracy of 10.6% on the combined ARC dataset and 6.75% on the public test set, indicating limited performance.
Similarly, Mitchell et al. [58] compared GPT-4 and GPT-4V to human performance on a simplified version of ARC called ConceptARC [59], finding that GPT-4V did not significantly improve performance over GPT-4 and that both models underperformed compared to humans. Other works have attempted to improve LLM performance by altering input representations. [60] translated visual ARC tasks into textual descriptions to leverage LLMs’ reasoning capabilities, achieving 20% accuracy with GPT-3 on the ARC training set. [61] emphasize the importance of object-based representations, introducing an object-centric encoding to feed into LLMs. They tested GPT-4 on the easiest 50 ARC tasks and solved 23 out of 50. These studies highlight that frozen LLMs possess some pattern-recognition abilities, and may possess some out-of-the-box associative learning abilities in their pre-trained forward pass, but they struggle with the abstraction and reasoning required for ARC tasks.

# 5.3.2 Neuro-Symbolic and Program Synthesis Approaches

Another line of research focuses on combining neural networks with symbolic reasoning or program synthesis to solve ARC tasks. Wang et al. [62] proposed a method where GPT-4 generates high-level textual hypotheses about the task and then translates these into code to solve the task. Testing on a subset of 40 ARC training tasks, they achieved a success rate of 27.5%, which increased to 37.5% with human selection of correct hypotheses. However, this again relies heavily on the frozen LLM’s forward pass being powerful enough to do the reasoning required for ARC tasks. [63] introduced CodeIt, a method that generates code based on grid pairs and uses a self-improving loop during evaluation. CodeIt solves 14.8% of the ARC evaluation tasks and runs into the limitations we describe in 2.3.
[64] developed Generalized Planning for ARC (GPAR), modeling ARC tasks as generalized planning problems in the Planning Domain Definition Language (PDDL) coupled with external functions representing object-centric abstractions of the grids, achieving 50% accuracy on a subset of object-centric tasks. [22] used a DreamCoder-inspired approach [65] with a domain-specific language for ARC tasks, solving 18 out of 400 tasks on the ARC evaluation set. These highlighted approaches attempt to incorporate more symbolic reasoning to solve ARC tasks, yet they achieve only a limited success rate or work only within a specialized domain. This may be due to the limitations of perceptual reasoning with these symbolic approaches, or in some cases to the added complexity of fully representing ARC transformations in code or domain-specific languages. [14] utilized neural embeddings and vector arithmetic to solve ARC visual analogies but achieved only 2% accuracy on the public evaluation set. We discussed the weaknesses of this approach further in 2.3.

# 5.3.3 Brute Force Search

Icecuber [26] attempts to solve ARC by performing a brute-force search over unary functions on pieces of input grids, forming a directed acyclic graph (DAG) of many possible transformations, until a DAG that creates the training output grid is found. This method won the 2020 ARC challenge competition on Kaggle [7].

# 5.3.4 Other Datasets

Other works have focused on simplified versions of ARC. [66] state that ARC is impenetrable for the time being and introduce the Sort-of-ARC dataset, which is limited to 20x20 grids and contains only 3 objects with a limited set of transformations.
They emphasize object-centric reasoning: a controller generates a solution vector, a slot-attention transformer extracts object vectors, and the solution and object vectors are then updated in an object-centric way before a spatial decoder generates the final solution. We believe that a purely object-centric approach does not generalize to tasks where objects are ambiguous and the correct “object” is very task-specific. [66] achieved 59% accuracy on out-of-distribution tasks in the Sort-of-ARC dataset. [61] introduce the 1D ARC dataset, a simplified version of ARC with only one dimension. They emphasize the importance of object-centric input representations and prompt GPT-4 with their object-centric representations of the tasks. They state that they strategically select the easiest 50 tasks out of the training set and solve 23 of the 50 tasks.

# 5.3.5 Summary and limitations of related work

To summarize, ARC has proven to be a challenging problem for the current paradigm of AI, with state-of-the-art results remaining low on the private test set compared to other datasets in the field, despite three recent competitions on ARC [7, 40]. We identify the following limitations of previous approaches:

• Comparisons without computational constraints: Some studies compare their score to others without reporting the computational cost of their methods, making it difficult to make an apples-to-apples comparison. We instead rely on private test set performance under the computational constraints set by Kaggle to avoid this issue. This is a major weakness of related work; we find that our method is much more compute-efficient while still achieving state-of-the-art results.
• Limited Generalization: Many methods perform well on subsets of ARC tasks or simplified versions but fail to generalize across the full spectrum of ARC tasks.
• Lack of Contextual Reasoning: Approaches that process grid pairs in isolation, such as CodeIt [63], [14], and GPAR [64], struggle with tasks that require understanding relationships across multiple examples.
• Lack of focus on perceptual reasoning: Methods that do not focus on perceptual reasoning seem to face difficulties due to the complexity of searching through an almost infinite space of possible transformations.
• Evaluation on Public Data: Some studies evaluate their models on the public ARC dataset, which may have been exposed in pre-training data, potentially inflating performance estimates.
The Abstraction and Reasoning Corpus (ARC-AGI) presents a formidable challenge for AI systems. Despite typically low performance on ARC, the deep learning paradigm remains the most effective known strategy for generating skillful (state-of-the-art) neural networks across varied modalities and tasks in vision, language, etc., and has proven able to learn the abstractions these diverse domains require. Our work doubles down on this paradigm by incorporating on-the-fly neural network training at test time. We demonstrate that fully committing to deep learning's capacity to acquire novel abstractions yields state-of-the-art performance on ARC. Specifically, we treat both the neural network and the optimizer (rather than just a pre-trained network) as integral components of the inference process, fostering generalization to unseen tasks. Concretely, we propose a methodology for training on ARC, starting from pretrained LLMs and enhancing their ARC reasoning. We also propose Test-Time Fine-Tuning (TTFT) and Augment, Inference, Reverse-Augmentation, and Vote (AIRV) as effective test-time techniques. We are the first to propose and show that deep learning can be used effectively for ARC, demonstrating boosts of up to 260% in accuracy with AIRV and a further 300% boost with TTFT. An early version of this approach secured first place in the 2023 ARCathon competition, while the final version achieved the current best score on the ARC private test set (58%). Our findings highlight the key ingredients of a robust reasoning system in unfamiliar domains, underscoring the central mechanisms that improve broad perceptual reasoning.
# 1. Introduction

Globally detecting long-lasting changes to the Earth’s surface is critical for informing decisions around tackling looming environmental, climate, and conservation challenges [4, 7, 10, 18]. Monitoring deforestation helps to understand where and why forest loss is happening; tracking urban expansion helps to quantify the environmental effects of urban sprawl; and identifying areas impacted by natural disasters like wildfires and earthquakes helps to target disaster relief efforts. Satellite imagery offers frequent views of any location on Earth; for example, the European Space Agency’s Sentinel-2 mission provides images of land and coastal locations across the globe every five days. Numerous computer vision datasets [3, 5, 23] and methods have been proposed for detecting changes in the tens of millions of square kilometers of the Earth that are imaged daily. However, the vast majority of these datasets focus narrowly on urban changes like new buildings and roads. This focus arises because annotating other types of long-lasting changes is expensive due to their rarity: most terrestrial locales exhibit minimal change over time, so even within a sizable dataset of satellite image time series, only a handful of time series may show certain changes like forest loss or the impact of wildfires. Furthermore, annotation demands specialized knowledge, as interpreting satellite images (especially outside urban areas) is not always straightforward. Additionally, the sheer number of potential categories of change makes it infeasible to develop a truly comprehensive dataset. Because of these challenges—high annotation costs, the rarity of substantial changes, and the specialized knowledge required—many researchers have resorted to unsupervised methods for detecting changes in satellite image time series. A few unsupervised methods have been proposed, such as CaCo [20], which learns representations that diverge when a location undergoes change.
However, these methods struggle to distinguish seasonal changes, such as deciduous trees changing color or crop fields being harvested, from long-lasting changes that permanently alter the Earth’s surface. To address these challenges, we propose OPTIMUS (Observing Persistent Transformations in Multi-temporal Unlabeled Satellite-data), a self-supervised learning method for classifying the presence of long-lasting change in satellite image time series. We focus on persistent changes, i.e., changes that have a visible impact on a location lasting longer than one year. OPTIMUS is based on the intuition that if a model can determine the correct long-term ordering of images in a time series, even when the sequence is hidden, it indicates the presence of significant, persistent changes in the images. Otherwise, if the location remained constant, the long-term ordering should be indecipherable. OPTIMUS only requires a collection of unlabeled satellite image time series for training.

Figure 1. Overview of binary outputs (red for change, green for no change) generated by OPTIMUS across global regions. The figure showcases key capabilities: localization of changes into 128x128 images (top left) for precise spatial detection; identification of diverse changes such as deforestation in the Amazon Rainforest (red, label = 1) and shifting sand dunes in Chad (green, label = 0); and generalization across varied contexts (middle), including urban expansion in a city (red, label = 1) and lake recession due to drought (red, label = 1). OPTIMUS also demonstrates robustness to seasonal variation, correctly identifying stable conditions such as vegetation changes in agricultural regions (green, label = 0).

Given a collection of image time series, OPTIMUS produces the subset of time series that underwent some long-lasting change.
This enables users interested in analyzing changes in certain geographies to hone in on the small subset of patches within that geography that changed. In addition to classifying the presence of long-lasting changes, we show that simple extensions enable OPTIMUS to localize the changes both in space and in time within the image time series. Although OPTIMUS does not categorize the changes that it detects, category-specific annotation is substantially cheaper over the filtered subset, making it feasible to develop models that identify rare types of changes. Thus, OPTIMUS can be used to equip decision-makers with reliable data on the locations of changes like wildfires, flooding, deforestation, and so on, to help address and mitigate the impacts of global environmental challenges. We developed a dataset consisting of one million image time series for training, along with a set of time series that we annotated with binary classification labels (has change or no change). Our zero-shot results on the test set show that OPTIMUS significantly outperforms other unsupervised methods, improving the AUROC score at distinguishing changed time series from 56.3% to 87.6%. OPTIMUS helps users hone in on locations with persistent changes, yielding a smaller, more manageable subset of images that can then be manually categorized by experts. This makes OPTIMUS a solution for wide-reaching applications in environmental monitoring, urban planning, disaster response, and conservation efforts.

# 2. Related Works

# 2.1. Existing Datasets

Several change detection datasets have been released, combining bitemporal satellite images or image time series with patchwise or pixelwise change labels. OSCD [5], Cropland-CD [15], and SYSU-CD [23] provide binary pixelwise change masks for bitemporal images, with most labels corresponding to new buildings, new roads, and construction work. Some datasets focus on individual categories.
EGY-BCD [9], SI-BU [14], and S2Looking [22] have multiclass pixelwise change labels for constructed and demolished buildings. Hi-UCD [24] consists of labels for nine categories of change in images of Tallinn, including newly constructed buildings, greenhouses, and roads.

Figure 2. Overview of the OPTIMUS framework for detecting persistent changes in multi-temporal satellite imagery. The framework processes over 30 million satellite images, leveraging a Siamese network architecture to classify whether a query image (Q) is temporally closer to one of two anchor images (A1 or A2). This self-supervised approach generates a diverse dataset of annotated changes, distinguishing “Change” and “No Change” events while filtering out seasonal variations. Downstream tasks include environmental monitoring and decision-making, enabling actionable insights from large-scale satellite data.

While these datasets are invaluable for change detection, they often focus on urban areas due to the relative ease of detecting changes in these regions. Acquiring labeled data for more complex environmental changes, like forest loss, wildfires, or desertification, is both costly and labor-intensive. These changes are subtle, long-term transformations that require specialized knowledge to detect and annotate accurately, unlike the more visually distinct changes in urban areas. Some datasets provide multi-temporal data for land cover types. DynamicEarthNet [25] provides monthly pixelwise labels for seven land use and land cover (LULC) classes, focusing on environmental changes like vegetation shifts and urbanization. DFC2021 Track MSD [13] targets land cover transitions using multisensor data fusion. However, these datasets are often limited due to annotation cost and are primarily centered around urban areas, making scalability and broader applicability an issue.

# 2.2. Supervised Methods for Change Detection

Supervised methods for change detection have demonstrated significant success in identifying changes in satellite imagery, but they rely on large amounts of labeled data, which presents scalability challenges. For instance, ChangeNet [11] leverages deep neural networks with pretrained weights, achieving high accuracy by fine-tuning on specific tasks. Despite its strong performance, ChangeNet requires extensive annotated datasets to generalize effectively, which is a notable limitation for broader applications. Similarly, the framework proposed by Wu et al. [27] focuses on analyzing image pairs, restricting its utility in detecting long-term changes across extended time series. Additionally, many supervised methods, including Zheng et al.’s [30] framework, are targeted towards urban change detection, where data is more abundant. This urban-centric focus further restricts the broader environmental applicability of these models. While pre-trained models [1] can reduce the amount of labeled data needed for downstream tasks like change detection, supervised methods still face significant challenges in terms of generalizability and scalability due to the lack of labeled data, especially for rare categories of change like desertification, shrub regrowth, and logging.

# 2.3. Unsupervised Methods for Change Detection

Given the scarcity of labeled satellite data, unsupervised methods have gained traction in recent years. These approaches aim to detect changes without the need for annotated datasets, making them more scalable and applicable to various scenarios. SeCo [21] is an unsupervised pre-training method for remote sensing data that leverages temporal and positional invariance to learn transferable representations. Similarly, SSL4EO [26] utilizes self-supervised learning techniques to learn representations of satellite images that can be used to detect changes.
Both SeCo and SSL4EO have shown effectiveness in identifying changes across diverse environments; however, they often require fine-tuning to optimize performance for specific tasks and datasets. On the other hand, methods like CaCo [20] do not require fine-tuning. CaCo uses contrastive learning to differentiate between unchanged and changed areas in satellite imagery by comparing features extracted from the images. Specifically, CaCo addresses the issue of varying seasonal and location-specific changes by normalizing the distance between feature representations of images. It calculates a ratio of distances between long-term images and short-term seasonal images to normalize scaling differences across different locations. This ratio helps distinguish actual changes from seasonal variations. Despite this sophisticated approach, CaCo struggles to effectively separate long-lasting changes from environments with pronounced seasonal changes due to its reliance on feature distance ratios, which may not always capture the true magnitude of persistent changes. Recent work by Mall et al. [19] introduces an event-driven change detection approach, which focuses on detecting meaningful change events from spatio-temporal satellite imagery in Cairo and California. Their method addresses the challenge of discovering significant changes from vast amounts of time-series data, similar to our objective of capturing persistent transformations. By focusing on events rather than pixelwise differences, this approach aligns with the need for methods that generalize across broader contexts. Our approach differs from existing methods [30, 2, 28, 29] by leveraging the full time series of satellite imagery in a fully unsupervised manner. This allows our model to better identify persistent changes, as it can learn from the temporal context provided by the entire series.
By focusing on persistent changes, our method aims to filter large datasets, enabling experts to concentrate on images with significant and lasting alterations. This approach addresses some of the limitations of both supervised and unsupervised techniques, providing a novel method for identifying and monitoring significant environmental changes over time. # 3. Dataset Most existing change detection datasets, such as OSCD and ChangeNet, focus predominantly on urban areas. This urban-centric sampling is adopted to achieve a higher rate of detectable changes, making annotation more feasible. However, this approach limits the variety of changes captured, excluding many non-urban changes such as wildfires, desertification, and agricultural expansion. To address these limitations, we compile a new dataset from publicly accessible sources, Sentinel-2 and NAIP, with the goal of detecting arbitrary persistent changes across diverse geographical locations. Unlike existing datasets, our approach involves randomly sampling image patches from across the entire globe, including non-urban areas, to ensure a broader spectrum of changes is captured. This strategy enables us to annotate a more diverse set of changes, avoiding the bias toward urban areas. To make this global sampling feasible, we focus on classifying time series of image patches for change rather than segmenting individual pixels. This is particularly advantageous for capturing non-urban changes, where drawing precise segmentation labels can be challenging. For instance, changes such as wildfires and desertification can have diffuse boundaries that are difficult to delineate precisely. For model training, we retain only the RGB channels (B2, B3, B4) and apply a cloud cover filter to exclude images with greater than $20\%$ cloud cover.
Each image in our dataset measures $512 \times 512$ pixels at a 10-meter/pixel resolution, allowing our models to detect fine-grained changes that may occupy only a few pixels. The dataset comprises one million time series, each representing the spatiotemporal context of a geographic location. Each time series includes 30-48 satellite images captured between January 2016 and December 2023, with a minimum interval of 2 months between any two images. This extensive temporal span of 8 years ensures the dataset captures a wide range of long-term changes, including deforestation and desertification, providing a robust basis for training models to detect persistent changes. # 3.1. Evaluation Set To evaluate our change detection models, we constructed an evaluation set of 300 satellite time series, each consisting of $512 \times 512$ images, randomly sampled from our dataset. A random subset of 100 of these images was further divided into 1600 $128 \times 128$ patches, with each patch receiving a binary label. Each time series in this set is assigned a binary label: positive labels indicate the presence of a persistent and non-cyclic change (i.e., not seasonal changes), while negative labels indicate no change. Approximately one-third of the time series are positively labeled. Given the complexity and diversity of changes we aim to detect, we prioritized the quality of annotations over quantity. To ensure high-quality annotations, we provided annotators with detailed instructions on how to identify and label persistent changes, emphasizing consistency across different types of changes, especially those with ambiguous boundaries, such as desertification or gradual urban expansion. Annotators were instructed to focus on clear indicators of non-cyclic change, disregarding seasonal variations or temporary phenomena.
Appendix A provides qualitative examples of the annotated changes, illustrating the types of changes our models are designed to detect. These examples are accompanied by the specific instructions given to annotators, offering insight into the criteria used for labeling. In the evaluation task, each model generates a change score for every time series in the evaluation set, from which task accuracy is computed. This evaluation closely mirrors practical downstream applications, where the objective is to identify locations with persistent changes over many years. Although using a binary label is a relatively simple approach—lacking the granularity of change categories or segmentation masks—it serves as an effective filtering mechanism and remains applicable to types of changes, such as long-term droughts, that have nebulous boundaries. This simplicity makes it cost-efficient to extract a diverse set of changes in future applications. We will release the unsupervised training data, labeled test set, and code (which is also included in the supplementary material) under an open license. # 4. Methods OPTIMUS determines whether remote sensing satellite time series contain non-seasonal changes by hiding the ordering of the images in the series and then training a model to attempt to recover the long-term ordering. If there are only seasonal changes but no persistent changes, then the long-term ordering should be indecipherable from the images alone. An example is lake levels changing due to seasonal precipitation. As this change is cyclic, while the images can be grouped into seasons, the ordering of the images across years cannot be distinguished. On the other hand, if a location exhibits persistent changes, it should be possible to distinguish all of the images captured before a change from those captured after a change. Road construction is such an example, with distinct stages like laying the subgrade, base, and asphalt.
In this section, we first describe a basic implementation of this intuition. We then identify flaws in the basic implementation, and detail how OPTIMUS addresses those flaws. In the basic implementation, given a time series $\langle I_1, I_2, \ldots, I_n \rangle$, we train a binary classifier (denoted as $b$) to predict whether an arbitrary intermediate image $Q$ is closer in time to $I_1$ or $I_n$. The classifier inputs a tuple $(I_1, I_n, Q)$ of the images only, with the timestamp of $Q$ hidden, and outputs a confidence score that $Q$ is closer to $I_n$ than $I_1$. During training, examples are constructed by (1) sampling a time series, (2) picking an arbitrary $Q$ between $I_1$ and $I_n$, and (3) computing the label based on whether $Q$ is before or after $I_{n/2}$. After training, to determine if a new time series contains change, we compute the confidence score from the model for every image in the time series, i.e., we compute a time series of scores $S = \{ b(I_1, I_n, I_j) \mid j = 1, 2, \ldots, n \}$. An oracle classifier would output a step function where the score switches from 0 to 1 halfway through the time series. If the time series contains change, then an effective model should perform similarly to the oracle (Figure 3, top left). However, if the time series contains no persistent changes, then the scores should fluctuate arbitrarily, since the model does not have sufficient information (Figure 3, bottom left). Then, we apply measures on $S$ that broadly assess the degree to which it is monotonically increasing. However, there are several flaws with this basic implementation. First, a location may undergo a change at a single timestamp, e.g., trees are logged within the span of a month. Suppose this change occurs between $I_1$ and $I_2$. Then, during training, we would be training the model to predict that $\langle I_2, \ldots
, I_{n/2-1} \rangle$ are closer in time to $I_1$, even though they are closer in appearance to $I_n$. Second, if $I_1$ or $I_n$ are low in quality, due to clouds, shadows, or imaging artifacts, then the quality of the scores in $S$ would be lowered. Below, we address these flaws, and also detail the measures that we use to capture a final change score based on the confidence scores in $S$, along with the model architecture. Training Example Selection. To train the classifier, we sample triplets $(A_1, A_2, Q)$ for each time series in the dataset, and train the model to predict whether $Q$ is closer to $A_1$ or $A_2$. Selecting $A_1 = I_1$, $A_2 = I_n$, and $Q$ as a random image from within the time series is the simplest approach, but has flaws as mentioned above. The first concern is that a change may occur just after $A_1$ or just before $A_2$, causing a randomly chosen $Q$ to be temporally closer to one anchor but visually more similar to the other. To address this, during training, rather than selecting a query image $Q$ between $A_1$ and $A_2$, we select $Q$ to be before $A_1$ or after $A_2$ with equal probability; this guarantees that, with respect to persistent changes, $Q$ will always be visually more similar to the same anchor that it is temporally closer to. Second, to address the concern of low-quality images due to environmental or photometric distortions, rather than using a single image for the anchors $A_1$ and $A_2$, we provide the model with multiple consecutive images. Specifically, each anchor consists of $c = 3$ consecutive satellite images from the time series. Using multiple consecutive images for $A_1$ and $A_2$ reduces the likelihood that all images in a set will be affected by distortions, increasing the robustness of the model.
However, there is a trade-off: if the anchor sets are too long, there is a risk that changes may occur within $A_1$ or $A_2$ themselves, potentially complicating the model's learning process. Through ablation studies, we determined that using three consecutive images strikes an optimal balance, providing sufficient robustness while minimizing the risk of internal changes within the anchor sets. More details on the input construction can be found in Appendix C. Model Architecture. For each triplet $(A_1, A_2, Q)$, we construct two tensors that the model processes independently. The first tensor is formed by concatenating the query image $Q$ with each image in the anchor $A_1$, resulting in a $3 \times (c+1) \times 512 \times 512$ tensor. Similarly, the second tensor is created by concatenating $Q$ with the images from anchor $A_2$. These tensors are denoted as $(A_1, Q)$ and $(A_2, Q)$, respectively. They are then input to the model, which is tasked with predicting whether the query $Q$ is temporally closer to $A_1$ or $A_2$. The model is trained using binary cross-entropy loss, defined as: $$ \omega = b((A_1, Q), (A_2, Q)) $$ $$ L_{\mathrm{OPTIMUS}} = -y \log(\omega) - (1 - y) \log(1 - \omega) $$ where $\omega$ is the output of the binary classifier and $y \in \{0, 1\}$ is the ground truth label indicating whether $Q$ is closer to $A_1$ ($y = 0$) or $A_2$ ($y = 1$). The model architecture uses a Siamese Neural Network [12] with a ResNet-50 backbone [8] to map the tensors into a shared embedding space. The embeddings are concatenated, passed through a linear layer, and then processed by a softmax function to yield a score between 0 and 1.
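The training-example construction and loss above can be sketched at the index level in plain Python. This is a sketch under stated assumptions: the exact index bounds and helper names are ours, not the paper's, and image loading plus the Siamese encoder are omitted.

```python
import math
import random

def sample_triplet(n, c=3, rng=random):
    """Pick two anchor runs A1, A2 (each c consecutive indices) and a query
    index q, drawn from before A1 or after A2 with equal probability, plus
    the label y (0: Q closer to A1, 1: Q closer to A2)."""
    assert n >= 2 * c + 2, "series too short for two anchors plus a query"
    a1 = rng.randrange(1, n - 2 * c)        # leave >= 1 index free before A1
    a2 = rng.randrange(a1 + c, n - c)       # leave >= 1 index free after A2
    if rng.random() < 0.5:
        q, y = rng.randrange(0, a1), 0      # Q precedes A1
    else:
        q, y = rng.randrange(a2 + c, n), 1  # Q follows A2
    return list(range(a1, a1 + c)), list(range(a2, a2 + c)), q, y

def bce_loss(omega, y, eps=1e-12):
    """L_OPTIMUS from the equation above; omega is the classifier's
    confidence that Q is temporally closer to A2."""
    omega = min(max(omega, eps), 1.0 - eps)  # clamp for numerical stability
    return -y * math.log(omega) - (1 - y) * math.log(1 - omega)
```

Because the query is never drawn from between the anchors, a persistent change occurring anywhere between $A_1$ and $A_2$ cannot flip the visual/temporal agreement of the label.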
The model is trained using the AdamW [17] optimizer with a learning rate of $3 \times 10^{-4}$ and a batch size of 5, for 5 epochs. The dataset is partitioned into training and validation sets with an 80/20 split. The training objective and model architecture are depicted in Figure 4. Change Score Measures. Given a time series $\langle I_1, I_2, \ldots, I_n \rangle$, suppose we calculated the series of scores $S = \{ b(I_1, I_n, I_j) \mid j = 1, 2, \ldots, n \}$. We use two measures to quantify persistent changes in the sequence. The first is the Spearman rank correlation coefficient: $$ \rho = 1 - \frac{6 \sum_{i=1}^{|S|} (\mathrm{rank}(s_i) - i)^2}{|S|(|S|^2 - 1)} $$ This metric assesses the monotonicity of $S$ by measuring how well the ranks of the scores correspond to their positions in the sequence. The second measure is the pivot score $P$: $$ P = \max_{i=1}^{n-1} \left| \frac{\sum_{j=1}^{i} s_j}{i} - \frac{\sum_{j=i+1}^{n} s_j}{n-i} \right| $$ This score identifies the index $i$ in $S$ that maximizes the absolute difference between the average values of the segments before and after $i$ (the maximum is taken over $i < n$ so that both segments are non-empty). A high pivot score indicates a significant and persistent change in the sequence, typically aligning with abrupt transitions. For a visualization of the scoring process, see Figure 3. In practice, as shown by an ablation study in Appendix C, the pivot score outperformed the Spearman coefficient, so all subsequent results and experiments report scores based on the pivot score. # 4.1. Spatial Localization of Changes OPTIMUS operates effectively as a general framework for large $512 \times 512$ images, corresponding to a $5\,\mathrm{km} \times 5\,\mathrm{km}$ region at $10\,\mathrm{m}$ resolution.
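Both measures are straightforward to compute directly from the score series $S$; a minimal sketch, assuming no tied scores for the Spearman ranks and keeping both segments of the pivot split non-empty:

```python
def spearman_monotonicity(S):
    """Spearman rank correlation between the scores and their time
    positions (assumes no ties, matching the closed-form formula)."""
    n = len(S)
    rank = {v: r for r, v in enumerate(sorted(S), start=1)}
    d2 = sum((rank[s] - (i + 1)) ** 2 for i, s in enumerate(S))
    return 1 - 6 * d2 / (n * (n * n - 1))

def pivot_score(S):
    """Maximum absolute difference between the mean scores before and
    after a pivot; high values indicate an abrupt, persistent change."""
    n = len(S)
    best = 0.0
    for i in range(1, n):  # split into S[:i] and S[i:], both non-empty
        left = sum(S[:i]) / i
        right = sum(S[i:]) / (n - i)
        best = max(best, abs(left - right))
    return best
```

A clean step series scores a pivot of 1.0, while a flat (or purely seasonal, oscillating) series scores near 0, which is exactly the behavior the oracle-vs-noise discussion above relies on.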
However, this extensive spatial context complicates the precise localization of regions or pixels where changes occur. For instance, urbanization might be confined to a small section of the image due to the construction of a few buildings. Running OPTIMUS on smaller patches to localize changes in a time series significantly degrades performance because the model was trained on large spatial contexts and cannot generalize well to smaller patches. A straightforward solution is to train the model on $128 \times 128$ patches to adapt to the lower spatial context. However, this approach introduces the challenge of sparse changes when subdividing time series into smaller patches. Most regions in an image do not exhibit change, resulting in a low signal-to-noise ratio during model training. To address this issue, we propose an iterative approach to extend OPTIMUS for spatial localization. Initially, we train OPTIMUS on the entire dataset. We then apply the trained model to filter the top $50\%$ of time series that show the most persistent changes, thereby increasing the density of changes in the dataset and enhancing the model's ability to learn from significant changes. Subsequently, we retrain OPTIMUS on this filtered dataset using $128 \times 128$ patches. # 5. Experiments # 5.1. Change Detection We first evaluate OPTIMUS against baselines on distinguishing $512 \times 512$ time series exhibiting persistent changes from those with no changes in our evaluation set. Baselines. We compare OPTIMUS to three baselines: CaCo, SeCo, and OSCD pre-training. CaCo is a contrastive method specifically designed to detect long-term changes while being insensitive to seasonal variations. The authors apply CaCo for classifying changes by computing the distance between feature representations. To classify a time series using CaCo, we randomly sampled three image pairs from the first and last years, then computed the average distance between their representations.
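The filtering step of the iterative procedure reduces to keeping the top-scoring half of the corpus before retraining. A minimal sketch, where `change_score` is a hypothetical stand-in for a trained OPTIMUS model, not the paper's code:

```python
def iterative_retrain_filter(time_series, change_score, keep_frac=0.5):
    """Keep the top keep_frac of time series by change score, yielding a
    denser-in-changes training set for the 128x128 retraining stage."""
    ranked = sorted(time_series, key=change_score, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_frac))]
```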
A larger distance indicated a greater change score, relative to the other time series. Following the authors' recommendations, we also normalized the distances between feature vectors representing long-term and short-term images, as the feature distances can scale differently depending on the type of location. Figure 3. Analysis of pivot scores and corresponding satellite images for different environmental changes. The top left shows a high pivot score indicating significant urban expansion with a marked spike in change, suggesting a sudden development. The bottom left illustrates ocean images associated with an extremely low pivot score, indicating minimal change. The top right depicts a desert region experiencing reforestation, with shrubs growing back. The bottom right shows a region undergoing deforestation. CaCo [20] is the only recent method geared for change classification, since most methods focus on segmentation. Thus, we make adaptations to compare against the other two baselines. Unlike CaCo, SeCo [21] is a contrastive model intended for both seasonally variant and invariant downstream applications. As such, it produces embeddings with three independent subspaces, including one variant and another invariant to seasonal changes. Since our task focuses on detecting non-seasonal changes, we explicitly remove the seasonally variant subspace and only use the other two subspaces to quantify changes. Otherwise, the evaluation method for SeCo is kept the same as for CaCo. The third baseline consists of pre-training on OSCD [5], a dataset consisting of bi-temporal images paired with change masks. After pre-training, we compare feature representations to classify change. Since CaCo reports the highest performance on OSCD, we use it as the base model, fine-tuning it on OSCD following the authors' training procedure. To evaluate on our dataset, change scores for each time series are calculated as before, using distances between representation vectors. Metrics.
For each baseline method, a change score was computed for each series in the evaluation set, and a threshold was applied to distinguish predicted changes from unchanged series. Performance was then measured in terms of accuracy against ground truth labels. Given the varying ranges of score values produced by different methods, the primary performance metric used is the non-thresholded Area Under the Receiver Operating Characteristic Curve (AUROC). In addition, we report the maximum F1 score across all thresholds. Table 1. Comparative Evaluation versus Baseline Methods Quantitative Results. Table 1 shows the performance metrics for all evaluated methods on the evaluation set. OPTIMUS achieves a substantial performance advantage over all other methods. We attribute this to OPTIMUS's approach of explicit classification of images based on temporal locality, which directly encodes changes into the feature representations. However, contrastive methods such as SeCo and CaCo only indirectly capture changes through differences in feature representations, resulting in less reliable outcomes. Lastly, supervised pretraining on OSCD increases the performance for CaCo, but it still falls far short of OPTIMUS. This gain can be attributed to CaCo's direct training for urban change detection, which helps detect urban images in our evaluation set. However, the overall effectiveness is limited due to the broader range of changes in our evaluation dataset beyond urban environments. Figure 4. Illustration of the proposed pipeline for fine-grained change detection. Satellite image time series are divided into $128 \times 128$ patches to isolate localized changes, enabling the OPTIMUS model to classify each patch as exhibiting meaningful changes (e.g., urban expansion) or no significant change (e.g., seasonal variations). This approach addresses challenges in detecting changes across diverse components within larger images by focusing on spatially localized transformations.
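Both metrics can be computed directly from score/label pairs without any plotting machinery; a minimal sketch (the rank-sum AUROC formulation below assumes no tied scores):

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney rank-sum formulation: the fraction of
    (positive, negative) pairs the score orders correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1 for p in pos for m in neg if p > m)
    return wins / (len(pos) * len(neg))

def max_f1(scores, labels):
    """Best F1 over all thresholds induced by the observed scores."""
    best = 0.0
    for t in scores:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        if tp:
            prec, rec = tp / (tp + fp), tp / (tp + fn)
            best = max(best, 2 * prec * rec / (prec + rec))
    return best
```

Because AUROC only depends on the ordering of scores, it is insensitive to the differing score scales of OPTIMUS and the distance-based baselines, which is why it is used as the primary metric.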
# 5.2. Localization In Table 2, we present the results of the localized evaluation for OPTIMUS. Various versions of OPTIMUS are compared to demonstrate the necessity of the iterative training procedure outlined in Section 4.1. The original OPTIMUS model, trained on $512 \times 512$ images, performs poorly when applied to smaller patches. Training OPTIMUS directly on $128 \times 128$ images without the iterative procedure results in an improvement in AUROC. However, the best performance is achieved with OPTIMUS trained using the iterative method, which increases AUROC significantly. This improvement is expected, given that the iterative OPTIMUS trains on a dataset with a much higher density of changes. The fairly high AUROC for the iterative OPTIMUS method indicates its effectiveness in localizing regions of interest for potential persistent changes. While the overall performance decreases compared to $512 \times 512$ images, this is anticipated due to the reduced spatial context. Table 2. Comparative Localized Evaluation # 5.3. Qualitative Examples Figure 3 provides examples of change score series generated by OPTIMUS on the evaluation dataset, along with the outputs of the pivot scores. It also shows the exact time at which the pivot score detected the greatest degree of change. The results illustrate that OPTIMUS accurately measures the degree of change and can often precisely pinpoint the moment of greatest change, where the partitioned sets of images before and after the pivot exhibit the greatest disparity. Qualitative examples are located in Appendix B.
In the face of pressing environmental issues in the 21st century, monitoring surface changes on Earth is more important than ever. Large-scale remote sensing, such as satellite imagery, is an important tool for this task. However, using supervised methods to detect changes is difficult because of the lack of satellite data annotated with change labels, especially for rare categories of change. Annotation proves challenging due to the sparse occurrence of changes in satellite images. Even within a vast collection of images, only a small fraction may exhibit persistent changes of interest. To address this challenge, we introduce OPTIMUS, a self-supervised learning method based on an intuitive principle: if a model can recover information about the relative order of images in the time series, then that implies that there are long-lasting changes in the images. OPTIMUS demonstrates this principle by using change point detection methods on model outputs in a time series. We demonstrate that OPTIMUS can directly detect interesting changes in satellite images, achieving an improvement in AUROC score from 56.3% to 87.6% at distinguishing changed time series from unchanged ones compared to baselines. Our code and dataset are available at https://huggingface.co/datasets/optimus-change/optimus-dataset/.
# 1 Introduction In recent years, the advent of Generative Artificial Intelligence (AI) has accelerated the process of developing new software. However, there are studies [20] showing that users who use AI assistants tend to introduce more bugs and vulnerabilities into their code, compared to those who write code on their own. Formal software verification could help mitigate the issue of bugs and security flaws, as it ensures that the software operates correctly and reliably in compliance with the given specification. Under the assumption of a well-formed specification, formal verification provides strong guarantees and an acceptance criterion for the generated code. An Interactive Theorem Prover (ITP) is a software tool that assists the user with the development of formal specifications and proofs. To date, there exist several ITPs, such as Rocq (formerly Coq) [1], Lean [4], Agda [12], Isabelle [18], and others. Rocq is a mature ITP, which has experienced more than 30 years of continuous development and improvement. Rocq has an extensive track record of high-impact projects. For example, Rocq was used to verify the correctness of the CompCert C compiler [14], the only compiler in which an extensive study found no bugs [32]. Verifying software has always been a rigorous and time-consuming process requiring much human effort. A number of solutions have been developed to help automate the process of theorem proving in Rocq. Proofs in Rocq are constructed from so-called tactics, which are elementary building blocks. Using tactics, the user manipulates the proof state, a data structure that contains the current goal and the context of the proof. Thus, with every applied tactic, the task is transformed and can be solved recursively. Most solutions implement tactic-prediction approaches and employ beam search or a similar algorithm to navigate the search space.
Tactician [3] is a KNN-based approach, which performs similarity-based retrieval of tactics used in similar states. CoqGym [30] and Proverbot9001 [24] use recurrent neural networks, while Graph2Tac [23] proposed a novel graph-based approach to neural tactic prediction. Thakur et al. [25] and Kozyrev et al. [13] instead build generation pipelines around general-purpose, cloud-hosted LLMs, so that no heavy computations occur on the user's machine. CoqPilot [13] additionally contributes a benchmarking framework and allows seamless integration of standalone tools into the workflow of Rocq's user. Many approaches call attention to the importance of premise selection, i.e., retrieving useful context information to advance generation. Yang et al. [31] introduced LeanDojo, a retrieval-augmented prover in Lean that significantly improves over non-retrieval baselines. Thompson et al. [26] present the Rango tool and report state-of-the-art performance on the CoqStoq benchmark, automatically synthesizing complete proofs for $32\%$ of the theorems. The work highlights how strongly a well-formed context contributes to the success of Rango. Moreover, they show that proof retrieval is the most performant mechanism for premise selection. The proof retriever selects relevant previously completed proofs from the current project and provides them as references to the model. According to the evaluation, Rango proved $47\%$ more theorems than the variant without a proof retriever. However, their mechanism of retrieving proofs relies on baseline text similarity over states. In this work, we build on top of their research and propose a novel embedding model for Rocq statements. It is trained to predict the similarity of their proofs and shows a relative improvement of up to $28\%$ on the evaluation set. Another promising direction in generative theorem proving that we have identified is agentic systems. Research by Kozyrev et al.
[13] shows that current Rocq generation methods mostly struggle with complex reasoning tasks. Algorithms that perform proof search on top of a tactic generator slow down dramatically and suffer performance degradation as theorem complexity grows, due to the properties of tree-based search. Other neural methods, which apply LLMs, suffer from the same problem due to the inability of the model to handle complex reasoning tasks [10]. Agentic systems are known to address these problems; however, to our knowledge, there have been almost no attempts to build an autonomous agentic system for an ITP. We build an extensive Model Context Protocol (MCP) server for Rocq and implement an autonomous agentic system over it, utilizing various problem-specific solutions, such as multi-agent debate. We conduct an evaluation and show that our agentic system strongly outperforms all solutions previously benchmarked in the CoqPilot work [13], raising the ratio of successfully proven theorems from $51\%$ to $60\%$. # 1.1 Contributions The main contributions of this paper are: RocqStar proof retriever We propose a novel approach for premise selection in Rocq. Rocq suffers from the data-scarcity problem that is common to most ITPs. Aggregating the largest publicly available repositories, one could expect to collect roughly 300 million tokens of Rocq, and about the same for Lean. In contrast, open-source Python corpora easily exceed 100 billion tokens. To tackle this issue, we contribute a convenient standalone tool, BigRocq, to extract additional data from Rocq code, utilizing the nature of Rocq's system and the intermediate states of the proof. BigRocq bridges the gap between Automated Generation and Rocq's ecosystem. Using BigRocq, we mine a dataset of 76,524 statements with corresponding proofs from 4 big projects and train a self-attentive embedder model, which learns to predict how close the proofs of given statements will be.
In addition, we provide a pipeline to reproduce such embeddings for an arbitrary project, which offers even better results. We integrate the solution as a new retrieval approach for selecting context theorems in CoqPilot and evaluate it using CoqPilot's benchmarking infrastructure. Compared to the baseline text similarity-based ranker, we show an improvement of $28\%$ on the evaluation set. The BigRocq tool, training dataset, and the code for training the embedder model are available at https://github.com/JetBrains-Research/rocqstar-rag. A model checkpoint is available at https://huggingface.co/JetBrains-Research/rocq-language-theorem-embeddings. RocqStar agentic system Addressing the lack of research on applying agentic systems to ITPs, we build an autonomous agent for writing Rocq proofs. A custom MCP server built over coq-lsp [5] handles interaction with Rocq; its code is available at https://github.com/JetBrains-Research/rocqstar-agentic-system. We implement an agentic system that includes such stages as planning, execution, and reflection. An ablation study demonstrates the critical role of planning, particularly the multi-agent debate (MAD) framework, in boosting performance. Evaluation shows that our end-to-end agent can solve $60\%$ of theorems from the CoqPilot dataset. To deploy our AI agent, we use privately available infrastructure called IDEFormer, but all our agent's code is available at https://github.com/JetBrains-Research/rocqstar-agentic-system/tree/main/rocqstar-agent. The remainder of the paper is organized as follows. §2 describes our Similarity-Driven Retrieval mechanism and §3 introduces the agentic system. The retrieval component is evaluated in §4.1 and the agent in §4.2. §4.3 provides an ablation study of the agentic system. We describe the related work in §5 and conclude in §6.
# 2 Similarity-driven Retrieval A known problem in Retrieval Augmented Generation (RAG), applied to the domain of Interactive Theorem Proving (ITP), is premise selection [27, 8]. Premise selection is the task of retrieving facts from a given knowledge base to help the model advance the proof. Huang et al. [7] and Xu et al. [29] highlight the importance of a well-formed context, showcasing that the presence of irrelevant context information degrades the model's performance. We distinguish two ways of doing premise selection in Rocq. Hint selection — given a context $C$ and a tactic with an unknown positional argument, e.g., apply _, the task is to yield potential candidates for the argument. Proof selection, in turn, given a theorem statement $S$, focuses on choosing other statements with their respective proofs, so that their presence in the context of the generation request would help the model with the generation of the proof for statement $S$. Most works [2, 11, 26, 31] on premise selection in Rocq and other ITPs focused on hint selection. However, Thompson et al. [26] and Kozyrev et al. [13] show that even a baseline proof selection significantly boosts the model's capabilities and is stronger than hint selection. The baseline proof selection presented in both works [26, 13], given the target statement $s_*$ and a database of already proven theorems $[s_0, p_0], \ldots, [s_n, p_n]$, chooses theorems whose statements have the maximum similarity to the target one. Similarity is defined by the BM-25 information retrieval technique [22] or the Jaccard similarity index.
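The baseline proof selection just described reduces to ranking the database by statement similarity and taking the top few entries; a minimal sketch of the Jaccard variant (whitespace tokenization is our simplification, not the exact tokenizer used in [26, 13]):

```python
def jaccard(a, b):
    """Jaccard similarity over whitespace-token sets of two statements."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def select_premises(target_statement, database, k=3):
    """Rank proven theorems [(statement, proof), ...] by statement
    similarity to the target and return the top k as generation context."""
    ranked = sorted(database,
                    key=lambda sp: jaccard(target_statement, sp[0]),
                    reverse=True)
    return ranked[:k]
```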
Both existing approaches suppose that if statements $s_*$ and $s_i$ are similar, their respective proofs $p_*$ and $p_i$ are similar as well:

$$ \mathrm{similarity}(s_*, s_i) \Longrightarrow \mathrm{similarity}(p_*, p_i) $$

and therefore assume that theorems $\{[s_j, p_j]\}$ chosen in such a manner are relevant while proving $s_*$. However, we show that this implication often does not hold. Let us define the proof similarity $D_L$ as the Levenshtein edit distance on lists of tactics, where the cost of substitution between two tactics is the Levenshtein distance over their strings. We include a Jaccard similarity term and add noise for robustness; otherwise, the proof-distance distribution over randomly selected pairs of theorems becomes U-shaped and the model fails to learn.

$$ p_i = [tac_{i_0}, \ldots, tac_{i_m}], \quad l_i = |p_i|, \quad D_L(p_i, p_j) = \frac{\mathrm{Lev}(p_i, p_j)}{\max(l_i, l_j)}, \quad D_J(p_i, p_j) = 1 - \frac{|p_i \cap p_j|}{|p_i \cup p_j|} $$

$$ \mathrm{proof\_distance}(p_i, p_j) = \alpha D_L(p_i, p_j) + (1 - \alpha) D_J(p_i, p_j) + \gamma, \quad \alpha = 0.7, \quad \gamma \sim \mathcal{U}(-\varepsilon, +\varepsilon) $$

Considering 1,855,701 pairs of theorems from the IMM project, we compute correlations between statement similarities and the respective proof similarities. In summary, BM25-based statement similarity shows a weak negative relationship with Levenshtein-based proof distance (Pearson $r = -0.154$, Spearman $\rho = -0.171$).
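The proof-distance metric defined above can be made concrete with a short sketch. This is an illustration, not the paper's implementation: the noise term $\gamma$ is fixed to 0 so the function is deterministic, and tactics are represented as plain strings.

```python
def lev(a, b, sub_cost=None):
    """Generic Levenshtein distance over two sequences.
    sub_cost(x, y) gives the substitution cost; defaults to 0/1."""
    if sub_cost is None:
        sub_cost = lambda x, y: 0 if x == y else 1
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1]))
    return d[m][n]

def proof_distance(p_i, p_j, alpha=0.7):
    """alpha * normalized tactic-level Levenshtein + (1 - alpha) * Jaccard distance.
    The gamma noise term from the paper's formula is omitted (set to 0)."""
    # Substitution cost between two tactics: character-level Levenshtein,
    # normalized to [0, 1] so it is comparable with insert/delete cost 1.
    tac_cost = lambda t1, t2: lev(t1, t2) / max(len(t1), len(t2), 1)
    d_lev = lev(p_i, p_j, tac_cost) / max(len(p_i), len(p_j), 1)
    s_i, s_j = set(p_i), set(p_j)
    d_jac = 1 - len(s_i & s_j) / max(len(s_i | s_j), 1)
    return alpha * d_lev + (1 - alpha) * d_jac
```

Identical tactic lists yield distance 0; fully disjoint short proofs approach 1.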
In contrast, BM25-based proof similarity exhibits a near-zero Pearson correlation ($r = 0.029$) and a small positive Spearman correlation ($\rho = 0.240$) — in both cases, effectively negligible. To address the issue of ineffective proof selection, we look for a function $f(s_i, s_j)$ that correlates with $\mathrm{proof\_distance}(p_i, p_j)$ more strongly than statement similarity does. In this work, we introduce a neural method that learns vector embeddings for Rocq theorem statements, training them so that the distance between any two vectors mirrors the similarity between the theorems' proofs.

# 2.1 Dataset mining

Along with other ITPs, Rocq struggles with data scarcity. To address this issue, we mine additional data from Rocq code. We utilize the Rocq system's functionality, preprocess theorems, and transform sequential proof structures into trees. Fig. 1 illustrates an example of the process. Since every node in such a tree is a valid state, we can automatically construct a proof for it by recursively iterating through its subtree edges. By extracting the statements with corresponding proofs, we can enlarge an arbitrary dataset of Rocq theorems roughly by a factor of four. The dataset format and its details are described in Appendix B. We call the proposed tool BigRocq and make it publicly available as a standalone component of our system. The idea of mining additional training data from the intermediate states of the ITP is not new; Kogkalidis et al. [11] conducted analogous research for the Agda [12] language. Similar research for Rocq also exists; however, some of those works depend heavily on deprecated ways of communicating with Rocq's compiler [30] and do not support up-to-date versions of Rocq, while others implement similar ideas as part of a training pipeline and do not allow for seamless reuse.
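The tree-mining idea above can be sketched as follows. The data structures here are hypothetical stand-ins, not BigRocq's actual representation: every node carries a proof state, every edge carries the tactic that leads to its child, and a node's proof is the depth-first concatenation of the tactics in its subtree.

```python
class Node:
    def __init__(self, state, children=()):
        self.state = state              # goal / proof state at this node
        self.children = list(children)  # list of (tactic, Node) edges

def subtree_proof(node):
    """Linearize the subtree below `node` into a tactic list closing its state."""
    tactics = []
    for tactic, child in node.children:
        tactics.append(tactic)
        tactics.extend(subtree_proof(child))
    return tactics

def mine(node):
    """One (state, proof) pair per node of the tree — the dataset enlargement."""
    pairs = [(node.state, subtree_proof(node))]
    for _, child in node.children:
        pairs.extend(mine(child))
    return pairs
```

A three-node chain thus yields three (statement, proof) pairs; how branching tactics (e.g. split) map to Rocq's sequencing is glossed over in this sketch.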
Using BigRocq, we mine a total of 76,524 statements, collected from 344 files across 4 large Rocq projects.

Figure 1: Processing theorems into trees. $s_i$ denotes a state.

# 2.2 Modeling

In our work, we formulate the problem as a self-supervised contrastive representation learning problem and train a self-attentive embedder model [17]. Given a dataset of pairs $[\mathrm{statement}_i, \mathrm{proof}_i]$ and a similarity function $f(\mathrm{proof}_i, \mathrm{proof}_j)$ defined between two proofs, we try to learn a function $r$ (ranker) that takes the corresponding Rocq statements as inputs but behaves as closely as possible to $f$. Given two statements, we learn to predict how similar their proofs should be. In §4, we evaluate the performance of the proposed model on the following task: given a statement $s_*$ and a set of proven theorems, we want to choose $k$ premises and use them as context for generating a proof for $s_*$.

$$ \mathcal{T} = \{(p_i, s_i)\}, \quad S = \{s_i\}, \quad r : S \times S \to \mathbb{R}, \quad \mathrm{Top}_k(r, s_*) = \underset{(p_i, s_i) \in \mathcal{T}}{\arg\mathrm{top}_k}\, r(s_i, s_*) $$

$$ \mathrm{Solve}(r, s_*) = \mathrm{Solve}\bigl(\mathrm{Top}_k(r, s_*), s_*\bigr) \in \{0, 1\} \qquad Q(r) = \mathbb{E}_{s_* \sim \mathcal{D}}\bigl[\mathrm{Solve}(r, s_*)\bigr] $$

Assume, without loss of generality, that by basic statement similarity we mean BM25-based similarity. As we have already shown in §2, text similarity is a poor choice of $r$, as it exhibits low correlation with the target function. However, it sets a strong baseline for our model.
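The $\mathrm{Top}_k$ selector above is a straightforward ranked cut; a direct rendering (with $r$ as any scoring callable, higher meaning more relevant — a sketch, not the system's code):

```python
def top_k(r, s_star, theorems, k=7):
    """theorems: list of (p_i, s_i) pairs; returns the k pairs whose
    statements score highest under the ranker r against the target s_star."""
    return sorted(theorems, key=lambda ps: r(ps[1], s_star), reverse=True)[:k]
```

Any ranker — Jaccard, BM-25, or the learned embedder — plugs into `r` unchanged, which is what makes the comparison in §4.1 possible.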
In practical applications, similar theorems occasionally have similar proofs. Accordingly, we decided to fine-tune Microsoft's 108-million-parameter CodeBERT encoder [6], originally pretrained on a combined corpus of programming and natural language texts. On a small additional test dataset, consisting of 50 theorems with corresponding hand-picked premises, raw CodeBERT achieved an accuracy of $48\%$. This corresponds to roughly the same accuracy as ranking with the Jaccard-similarity metric on statements. We train the model using the InfoNCE [19] loss. In particular, given a statement $s$, at the dataset post-processing stage we compute distances to other samples. We then mark a pair as positive if the distance between the two proofs is less than a threshold $\tau_{pos}$, and we mark it as negative if the distance is greater than $\tau_{neg}$. Given the hyperparameter $k_{\mathrm{neg}}$ and sets of positive and negative pairs $P_s^+$ and $P_s^-$, we compute a per-statement loss term $\mathcal{L}_s$ as follows:

$$ \mathcal{L}_s = -\log \frac{\exp\bigl(\varphi(z_s, z_p)/T\bigr)}{\exp\bigl(\varphi(z_s, z_p)/T\bigr) + \displaystyle\sum_{j=1}^{k_{\mathrm{neg}}} \exp\bigl(\varphi(z_s, z_{n_j})/T\bigr)} \qquad \bigl(p \in P_s^+, \ n_j \in P_s^-\bigr) $$

where $\varphi$ is the cosine similarity between $\ell_2$-normalized embeddings of statements. Experiments on the $k_{\mathrm{neg}}$ hyperparameter showed little fluctuation in the results; however, $k_{\mathrm{neg}} = 4$ produced the smoothest convergence, which aligns well with research by Wu et al. [28].

Figure 2: Agentic pipeline with RocqStar retriever.
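The per-statement InfoNCE term can be sketched in numpy, with $\varphi$ as cosine similarity over $\ell_2$-normalized vectors. The embeddings here are placeholder arrays, not outputs of the trained encoder:

```python
import numpy as np

def l2_normalize(v):
    return v / np.linalg.norm(v)

def info_nce(z_s, z_p, z_negs, T=0.07):
    """Per-statement InfoNCE: anchor z_s, one positive z_p, negatives z_negs.
    phi(a, b) is the dot product of l2-normalized vectors, i.e. cosine similarity."""
    z_s, z_p = l2_normalize(z_s), l2_normalize(z_p)
    pos = np.exp(np.dot(z_s, z_p) / T)                    # exp(phi(z_s, z_p)/T)
    neg = sum(np.exp(np.dot(z_s, l2_normalize(z_n)) / T)  # sum over k_neg negatives
              for z_n in z_negs)
    return -np.log(pos / (pos + neg))
```

The loss is near zero when the anchor aligns with the positive and is orthogonal to the negatives, and grows large in the opposite configuration. The temperature T = 0.07 is a common default, not necessarily the paper's setting.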
Given the particular shape of the sample distance distribution, during training we experienced the problem of the model converging too quickly on “easy” negatives — pairs whose proofs (and typically their statements) are already far apart in the raw distance space. To keep informative gradients flowing, we add hard negative pairs: with some probability, we treat a pair of statements as negative if $\tau_{\mathrm{hardneg}} \leqslant \mathrm{sim}(\mathrm{proof}_a, \mathrm{proof}_b) \leqslant \tau_{\mathrm{neg}}$. The introduction of hard negative samples helped to stabilize the training process; we observed a less steep training curve and better generalization overall. Other training hyperparameters are listed in Appendix C.

# 3 Agentic System

Agent-based approaches are broadly used in code generation and repair tasks. Despite a large number of autonomous and semi-autonomous coding agents, they are not widely used in formal proof generation and are not tailored to Rocq's specifics. To address this, we have implemented the RocqStar agentic system. To allow interaction between the agent and the Rocq system, we develop a REST API server that provides a set of tools useful during execution. We apply our domain knowledge and construct these tools to bring the agent-driven proving process as close as possible to a human-driven one. Examples of available function calls include checking the validity of proofs, retrieving the valid prefix of a given proof, gathering additional information about available entities in the context, and interacting with the context via commands like Print ?a. to identify the type of an argument or Search ?exp. to search for defined terms by a pattern. The toolset is described in detail in Appendix D. Interaction with the Rocq system is carried out through its language server, coq-lsp [5].
To conform to the commonly used Model Context Protocol (MCP) and allow seamless agent interaction with the environment through tools, we implement an MCP server that wraps the REST API server. In the provided tool set, the most important tool is the proof checker. It not only returns whether the proof in question is valid, but, in case of an erroneous proof, also returns the error itself, where in the proof it happened, and the valid prefix before the error along with the remaining goals after this prefix. Such functionality enables the agent to keep track of the current proof state and benefit from partial proof progress.

# 3.1 Agent Logic

The input to the agent is a target theorem without a proof and the file where it was declared, see Fig. 2. The agent's pipeline is logically split into two main stages: planning and execution. In the planning phase, multiple language models work out the strategy for the subsequent implementation. During execution, agents follow the plan, aiming to generate a correct proof.

Planning Stage. We use the idea of multi-agent debates to produce a plan of how the agent should prove the given theorem. Specifically, we make two LLMs argue with each other about the plan: one of the LLMs produces the initial plan and defends it (pro LLM), while the other makes arguments against this plan (con LLM), see Fig. 2. After several rounds of debates, the whole message history is sent to the judge LLM, which decides who won the debate and returns the final plan. With this procedure, we generate $k$ plans. We send them to the plan scoring LLM and prompt it to assign a numerical score to each plan (the higher, the better). After that, we select the $l$ plans with the highest scores and send them to the execution stage, see Fig. 2.
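The planning stage can be rendered schematically as follows. All LLMs are stand-in callables here, and the prompts and control flow are simplifications of the system described above, not its exact implementation:

```python
def debate_plan(pro_llm, con_llm, judge_llm, theorem, rounds=2):
    """One multi-agent debate: pro proposes and defends a plan, con attacks it,
    and the judge settles the debate by returning the final plan."""
    history = [pro_llm(f"Propose a proof plan for: {theorem}")]
    for _ in range(rounds):
        history.append(con_llm(f"Criticize this plan: {history[-1]}"))
        history.append(pro_llm(f"Defend against: {history[-1]}"))
    return judge_llm("\n".join(history))   # judge sees the whole history

def planning_stage(pro, con, judge, scorer, theorem, k=4, l=2):
    """Generate k plans via independent debates, score them, keep the l best."""
    plans = [debate_plan(pro, con, judge, theorem) for _ in range(k)]
    return sorted(plans, key=scorer, reverse=True)[:l]
```

With the evaluation settings reported later (two debate rounds, k = 4, l = 2), each theorem thus triggers four debates and forwards two surviving plans to execution.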
Execution Stage. For each of the selected plans, we run an executor agent that follows the strategy, iteratively invoking tools from the provided tool set (the proof checker, context-inspection queries, search commands, and so on) as atomic actions. By calling these tools, it interacts with the environment via the MCP server. We track how many erroneous proofs were checked in a row, and if this number exceeds a fixed threshold (we set the threshold to five during evaluation), we call a critic model to evaluate the current progress of the proof and find deviations from the selected plan. After that, we retrieve theorems along with their proofs whose top-level goals are similar to the currently remaining goal, according to the cosine similarity between their RocqStar-ranker embeddings. We prompt the LLM to explain which tactic sequences could help finish our proof. We gather the generated criticism and send it to the replanner LLM, along with the similar proofs and their analysis, to refine the current plan. The replanner is a separate language model that revises the plan based on the critic's feedback and the retrieved examples. The whole message history is sent back to the executor agent. During the execution of each plan, $n$ tool calls are allowed. If a valid proof is not found after $n$ tool calls, we mark the plan as failed. In this case, we ask a plan failure summarizer LLM to generate a short explanation of why the strategy execution failed and what happened during it. This summarized explanation is then sent to a new execution stage with the next selected plan. This procedure is repeated until a correct proof is found or there are no more strategies to execute.

# 4 Evaluation

To evaluate our approach, both in parts and as a whole, we use the CoqPilot benchmarking framework. We required a dataset with a large number of human-written theorems and proofs. To compare our solution to existing ones, we decided to re-use the dataset by Kozyrev et al.
[13]. It is limited to 300 theorems from the IMM project [21], which suited us in terms of computational and financial costs. The theorems are partitioned into three groups corresponding to difficulty level. The length (in tactics) of the human-written reference proof of a theorem estimates its difficulty. The sizes of the groups were chosen with respect to the initial distribution of proof lengths in the project. The final group sizes and length ranges of each group can be found in Table 2. From now on, we refer to the described dataset as the IMM-300 dataset. For smaller ablation studies, we additionally prepared IMM-50, a 50-theorem subset of IMM constructed with the same procedure. No theorems from the dataset were present in the training set of the RocqStar ranker embedding model. Moreover, the training set contained only partial theorem goals, not initial statements. The split of both datasets into groups, details, and limitations are described in Appendix A. Computational and financial resources used for the experiments are described in Appendix F.

# 4.1 Retrieval Mechanism

We integrate our retrieval mechanism as a ranker into CoqPilot and evaluate it on the IMM-300 dataset with different models under the hood. We compare the performance of our ranker with the baseline approach, which works in the following manner. Given a target theorem statement $s_*$ and a set of proven theorems $[s_0, p_0], \ldots, [s_n, p_n]$, it ranks theorems in descending order of $J(s_*, s_i)$, where $J(s_*, s_i)$ is the Jaccard similarity index computed over $S_{s_*}$ and $S_{s_i}$, the sets of tokens of the statements. A statement is split into tokens by whitespaces, commas, etc. The Jaccard similarity index is semantically almost the same as the BM-25 metric and produces the same numerical results.
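The baseline ranker just described admits a compact sketch. The tokenization details (the exact delimiter set) are approximated here:

```python
import re

def tokens(statement):
    """Split a statement into a token set on whitespace and punctuation."""
    return set(t for t in re.split(r"[\s,.:;()]+", statement) if t)

def jaccard(s_star, s_i):
    a, b = tokens(s_star), tokens(s_i)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank(s_star, candidates):
    """candidates: list of (statement, proof); most similar statement first."""
    return sorted(candidates, key=lambda sp: jaccard(s_star, sp[0]), reverse=True)
```

The RocqStar ranker replaces `jaccard` with cosine similarity between learned statement embeddings; the surrounding sorting logic stays the same.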
For each theorem in the dataset, we take the theorems within the same file, sort them using the ranker (Jaccard or RocqStar, respectively), take the $k$ most relevant ones ($k$ is equal to 7 in our experiments), and send a request to the model to generate the completion. The chosen theorems are sent as a few-shot prompt. Generation for each theorem is requested 12 times. If the Rocq system accepts any of the proofs, the theorem is considered solved. The target metric in our evaluation is the ratio of solved theorems. The evaluation results are presented in Table 1.

Table 1: Model performance under different ablations across all evaluation sets.

Table 2: Measuring the performance of different Rocq generation methods via CoqPilot.

As can be seen from Table 1, our proposed RocqStar ranker outperforms the baseline Jaccard ranker in almost all experiments, showing reliable improvement. Most of the performance increase is seen in the second group; we interpret these results as follows. For short theorems in the first group, the assumption that similar statements imply similar proofs often holds; therefore, both rankers perform close to one another. For complex theorems from the third group, it rarely happens that two theorems have significantly similar proofs, leaving less room for improvement for the model.

# 4.2 Agentic System

We evaluate our agentic system on the IMM-300 dataset, pursuing the goal of solving as many theorems as possible. For all parts of the planning stage, we use the Claude 3.5 Sonnet model, performing two rounds of debates between the actors. Four plans are generated, and two are chosen for further execution. During execution, 20 tool calls to the MCP server are allowed. Additionally, after five proof-checking calls, the critic model (Claude 3.7 Sonnet) is invoked and analyzes whether a deviation from the initial plan has occurred.
We use Claude 3.5 Sonnet for execution and re-planning, and Google Gemini Flash 2.0 for other tasks, due to the need for a large context window. The results of the evaluation are shown in Table 2. As shown in Table 2, our agentic system outperforms the other models benchmarked inside the CoqPilot framework. The strongest model so far was Claude 3.5 Sonnet, which achieves $51\%$ accuracy on the dataset, given 12 retries for each theorem. The RocqStar agent achieves $60\%$, a substantial improvement. In terms of financial costs, we estimate a run of the agent on one theorem at 1.3 US dollars, compared to 0.25 US dollars for 12 requests to pure Claude 3.5 Sonnet in CoqPilot.

Table 3: Ablation study of Multi-Agent Debate at the planning stage.

# 4.3 Ablation study

Considering that software-verification tasks cannot be solved ad hoc, without explicit planning, we conduct an ablation study that measures how removing the Multi-Agent Debate (MAD) layer and reverting to single-pass planning affects the proportion of theorems successfully proved. In this experiment, we leave all other modules of the system unchanged, including the plan scoring LLM. We run two versions of the agent: the first generates plans via MAD, and the other generates plans by a single request to an LLM without further refinement. We evaluate both agents on the IMM-50 dataset and report the results in Table 3. The same table shows the performance of the Claude 3.5 Sonnet model on the IMM-50 dataset with 12 retries and the RocqStar ranker; this result is provided for reference. The results in Table 3 confirm a consistent advantage for MAD across all three complexity groups, with the most significant improvement observed on harder theorems. This trend highlights the importance of MAD in solving composite multi-stage problems, such as complex proof construction. We attach an example of how MAD refines the execution plan to solve a previously unsolved theorem in Appendix E.
# 5 Related Work

Many Rocq generation methods improve generation using retrieval augmentation. Most of those works solve the hint selection problem described in §2. Those approaches build proofs tactic by tactic, retrieving relevant lemmas or definitions to use in the next step. The problem of searching for existing proofs that could advance the generation is barely described in the literature. CoqPilot [13] and Rango [26] pack the context for the generator model with the theorems most similar to the one being solved. Our work proposes a novel method for premise selection and shows improvement over the baseline from previous works [13, 26]. In our multi-agent system, we distribute responsibility across different agents. Differentiating between models that handle natural reasoning and those that handle coding is common practice in agentic systems. The work of Li et al. [15] proposes a similar task-force split into Thinker, Solver, Critic, and Debug agents. Liang et al. [16] introduce a multi-agent debate framework, show that this approach encourages divergent thinking, and demonstrate its usability in complex reasoning tasks. We show that planning is essential for the formal verification pipeline. Theorem proving demands a clear, high-level picture of the proof before executing any code. Running a multi-agent debate at the planning stage ensures rigorous evaluation of different approaches before interacting with the Rocq system. We produce several plans for further execution. In a manner close to Islam et al. [9], we assign scores to plans and run them in order of decreasing score. To our knowledge, there have been almost no attempts to build agentic systems for ITPs. Yang et al. [31] have shown an initial proof of concept of an agent for Lean; however, their agent lacks autonomy, its pipeline incorporates only minimal tooling, and it does not possess an explicit planning stage.
As a user interface, we utilize CoqPilot to integrate into the common Rocq programmer's workflow. CoqPilot is a VSCode plugin facilitating access to Rocq generation methods for end-users.
# 6 Conclusion

Interactive Theorem Proving has repeatedly been shown to benefit from Generative Artificial Intelligence. This paper assesses multiple approaches to Rocq generation and illuminates potential avenues for improvement. We highlight the importance of thorough premise selection for generating Rocq proofs and propose a novel approach leveraging retrieval via a self-attentive embedder model. The evaluation of the designed approach shows up to a 28% relative increase in the generator's performance. We tackle the problem of writing Rocq proofs using a multi-stage agentic system tailored for formal verification and demonstrate its high effectiveness. We conduct an ablation study and show the usefulness of multi-agent debate at the planning stage of proof synthesis.
# 1 Introduction

Text-to-image (T2I) latent diffusion models (LDMs) have significantly advanced the field of image generation (59; 66), showcasing remarkable fidelity and enhanced creative control in image editing (7; 10; 15; 28; 31). However, the efficacy of image editing is not uniform: modifications affecting attributes with causal dependencies risk generating unrealistic and potentially misleading results if these underlying relationships are disregarded, a significant concern for data where causal interplay determines the imaged content (49; 54; 52). Recent efforts in video editing adapt T2I models to address the challenge of maintaining spatiotemporal consistency (16; 43; 72; 101; 102; 7; 15; 89). While some of these techniques rely on adapting pre-trained models through a training or fine-tuning process to achieve text-guided video editing (16; 43; 72; 101; 102), other zero-shot (7; 15) or one-shot (89) methods have focused on enabling text-driven video editing with a reduced reliance on extensive training. Moreover, recent work highlights that suboptimal prompts can adversely affect outcomes in both video generation (30; 5; 13) and video editing (28; 43), necessitating careful prompt design in these domains. This underscores the critical role of input text prompts in improving visual fidelity and semantic accuracy in video editing. Yet, in contrast to these developments, exploring “what-if” scenarios through counterfactual video generation has not been previously investigated. Causally faithful video counterfactuals, achieved via controlled causal interactions and optimized prompts, are essential for realistic video editing.

Figure 1: Generated counterfactual results: We intervene on age (make the woman young) and gender (transform a woman to a man with a beard). Our method (3rd row) optimally steers the counterfactual generation process by causally tuning an initial target prompt, achieving better results than without steering (2nd row).
LDMs have demonstrated tremendous capabilities in image and video editing (24; 76); however, their intricate latent spaces present challenges for direct manipulation and effective control during the generation process (53; 38). Inspired by prompt optimization for black-box LDMs (18; 48), we posit that text prompt modification offers an implicit yet powerful way to steer generation towards effective, realistic counterfactual estimations. We hypothesize that optimizing causally consistent prompts is key to controlling causal consistency and achieving effective, realistic video counterfactuals. We approach video editing as the generation of video counterfactuals, viewing it as a specific instance of out-of-distribution (OOD) generation (55; 71; 65), where the goal is to modify specific attributes of a factual (source) video (e.g., transforming a woman into a man with a beard). To generate plausible and semantically meaningful video counterfactuals, we introduce a novel framework that integrates an assumed prior causal graph with a vision-language model (VLM) loss using textual differentiation (97) for optimized prompt-driven causal steering. Both the VLM and the LDM are treated as black boxes, allowing us to focus on their interaction without the need for explicit manipulation or particular knowledge of their internal workings. Figure 1 depicts how counterfactual estimations improve with a causally consistent prompt using text differentiation optimization (97). Our methodology addresses the challenge of explicitly controlling the high-dimensional latent space of LDMs to achieve specific, targeted modifications. By manipulating the input text prompt with causal guidance, our approach steers the LDM’s transformations during inference toward the desired counterfactual outcome. This process allows for human-controllable prompt tuning, enabling the generation of causally consistent counterfactuals. 
The VLM-counterfactual-loss-optimized text conditioning directs the denoising process at each timestep, ensuring that the generated video frames align with the desired counterfactual change in a causally consistent manner, thus effectively controlling the generation of diverse counterfactuals. Consequently, the main focus of this work is to automate the creation of causally faithful, well-structured, and model-aligned textual prompts, steering the counterfactual transformations toward accurate and semantically meaningful OOD edits at inference. In summary, our contributions are:

• We present a novel framework that allows steering diffusion-based video editing towards causal counterfactuals by propagating textual feedback from a VLM-based counterfactual loss through the LDM input prompt.

• We improve the causal effectiveness of counterfactual estimations by tuning the input prompt, without requiring access to the LDM's internal mechanisms, while preserving video quality, minimality, and temporal consistency.

• We demonstrate that causal steering enables causally faithful counterfactual generation from LDM latent spaces.

• We design VLM-based evaluation metrics to further assess the capacity of diffusion-based video editing frameworks for plausible counterfactual generation.

# 2 Related Work

Latent Diffusion-based Video Editing. LDMs (59; 66) have significantly advanced video generation and editing (8; 76). Tuning-based methods focus either on adapting text-to-image models (58) through cross-frame attention and one-shot tuning (101; 89; 43; 72; 16), or on fine-tuning text-to-video models with multi-shot tuning (102). Controlled editing methods, like ControlNet (4), use priors such as optical flow (91; 23), depth maps (10), or pose information (47; 93) to enforce consistency. Training-free methods use diffusion features (77), latent fusion (61; 32), noise shuffling (31), or optical-flow guidance (6; 7; 92; 28).
This paper investigates how prompt optimization, integrating text differentiation and causal priors, enables causal steering to generate effective counterfactuals that maintain minimality (49), video quality, and temporal consistency.

Counterfactual Image and Video Generation. Visual counterfactual generation explores hypothetical “what-if” scenarios through targeted and semantically meaningful modifications to the input (84; 70). It is applied in counterfactual explainability (80; 3; 26; 27; 87; 57; 56; 73), robustness testing (9; 60; 41; 39; 95; 100; 87), and causal inference (55; 81; 82; 83; 54; 35; 90; 1; 68; 65; 69; 11; 75). While much work focuses on static images (51; 65; 49), the temporal coherence of causal counterfactual video generation remains underexplored (64). We integrate causal relationships and text-differentiation-based prompt optimization into three different LDM methods via a VLM counterfactual loss, to generate effective video counterfactuals.

Evaluation of Visual Editing and Counterfactuals. Evaluating counterfactuals is challenging (70; 49). While standard metrics assess image quality (36; 98; 86; 20) and semantic alignment (62), causal counterfactuals (49; 12; 17) require stricter criteria such as causal effectiveness (51) and minimality (68). In video, evaluation is more complex due to the required temporal consistency. Existing video benchmarks (46; 96; 45; 25; 29; 76) overlook counterfactual reasoning. In addition, commonly used metrics in video generation such as DOVER (88), CLIP Score (62), and flow warping error (40) do not assess causal relationships. We evaluate the generated counterfactual videos both in terms of causal adherence, through counterfactual effectiveness and minimality (51; 65; 49), and in terms of general video quality and temporal consistency. For minimality, we introduce a novel metric based on vision-language models.
This comprehensive evaluation allows us to thoroughly assess causal adherence and the quality of counterfactual generation in text-guided video editing.

# 3 Methodology

# 3.1 Background and Preliminaries

T2I LDMs for Video Editing. Recent text-guided video editing methods (89; 7; 15) employ pretrained T2I LDMs, typically Stable Diffusion (66), that operate on a latent image space. A pre-trained autoencoder $(\mathcal{E}, \mathcal{D})$ (33; 79) maps an image frame $x$ to a latent code $z = \mathcal{E}(x)$, with $\mathcal{D}(z) \approx x$. A conditional U-Net (67) denoiser $\epsilon_\theta$ is trained to predict the noise in the latent $z_t$ at diffusion timestep $t$, minimizing $\mathbb{E}_{z, \epsilon \sim \mathcal{N}(0,1), t, c}\bigl[\|\epsilon - \epsilon_\theta(z_t, t, c)\|_2^2\bigr]$, where $c$ is the embedding of the text prompt $\mathcal{P}$. The U-Net $\epsilon_\theta$ can either be inflated into a 3D spatio-temporal network for one-shot video fine-tuning (89) and zero-shot optical-flow guidance (7), or directly used for frame editing, with temporal consistency imposed via feature propagation (15). These methods leverage deterministic DDIM (74) sampling and inversion, which allow encoding (intermediate) noisy steps and reconstructing or editing the original video frames. Although each method has its own temporal regularization strategies and heuristics, given an input video $\mathcal{V}$ and an editing prompt $\mathcal{P}$, the core video editing process can be expressed as:

Figure 2: VLM causal steering at a glance: The video editing system operates as a black-box (frozen) counterfactual generator and the (black-box) VLM as an evaluator of the generated counterfactuals.
The VLM receives as input a generated counterfactual frame, the evaluation instruction, and the target counterfactual prompt $\mathcal{P}$, and returns textual feedback, which is used to compute a “textual gradient” $\partial\mathcal{L}/\partial\mathcal{P}$ and optimize $\mathcal{P}$.

$$ \mathcal{V}' = \mathcal{D}\bigl(\mathtt{DDIM\text{-}sampling}\bigl(\mathtt{DDIM\text{-}inversion}(\mathcal{E}(\mathcal{V})), \mathcal{P}\bigr)\bigr). $$

Within Pearl's abduction–action–prediction causal paradigm (55), DDIM-inversion can be viewed as the abduction step, the action step applies the prompt-based intervention using the editing prompt $\mathcal{P}$, and DDIM-sampling carries out the prediction, yielding the final counterfactual edited video $\mathcal{V}'$.

Video Editing Framework as a Counterfactual Generator. We treat the video editing framework as an opaque black-box system that performs counterfactual generation, as illustrated in Figure 2. In other words, we assume no access to the parameters of the LDM $\epsilon_\theta$ — we cannot update $\theta$ or perform backpropagation — and no control over its internal mechanisms and operations such as DDIM sampling and inversion. More specifically, given any prompt-based video editing system $f$, an input video $\mathcal{V}$, and a counterfactual (editing) prompt $\mathcal{P}$, Equation 1 simply becomes $\mathcal{V}' = f(\mathcal{V}, \mathcal{P})$. Our framework is compatible with any black-box, text-guided diffusion video editing system. In our experiments, we evaluate it using three different diffusion-based video editing systems. As the counterfactual prompt $\mathcal{P}$ can play a significant role in the counterfactual video output $\mathcal{V}'$ (18; 30; 28), we further refine and optimize $\mathcal{P}$ using textual feedback from an external optimizer (97).
# 3.2 VLM-based counterfactual loss for steering the video generation

Suboptimal prompts can degrade video editing quality, making effective prompt refinement essential (18; 50; 48; 30; 5; 13; 28; 43). While manual prompt engineering (44) or simple paraphrasing (14; 19) can help, black-box prompt optimization approaches usually fine-tune a large language model (LLM) as a model-specific prompt interface for each T2I model (18; 50; 30; 5), whereas others explore the space of possible prompt paraphrases by iteratively updating in-context learning examples (48). To automate counterfactual generation for any text-guided video editing system, we employ TextGrad (97), which naturally allows prompt-level causal steering by optimizing counterfactual prompts according to an underlying causal graph. TextGrad leverages LLMs to generate natural-language "textual gradients" used for iterative refinement of complex systems through textual feedback. Building on this, we design a counterfactual "multimodal loss" using a VLM to guide the video generation towards the target interventions. The proposed framework is illustrated in Figure 2. Given a generated counterfactual video frame, the counterfactual prompt, and an evaluation instruction containing the target interventions, we implement our proposed "multimodal loss" using a VLM:

$$
\mathcal{L} = VLM(\mathcal{V}'_{frame}, \mathrm{evaluation\ instruction}, \mathcal{P}),
$$

where the evaluation instruction is a well-defined textual input to the VLM to suggest improvements on $\mathcal{P}$ based on how well the generated visual input $\mathcal{V}'_{frame}$ (extracted from $\mathcal{V}'$) aligns with the target counterfactual interventions. Given a predefined causal graph, we further augment the evaluation instruction with a causal decoupling textual input that instructs the VLM to ignore upstream variables when intervening on downstream ones.
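A sketch of how the loss of Eq. (2) and its causal-decoupling augmentation could be assembled, assuming the VLM is available as an opaque callable (function names and instruction wording are illustrative, not the paper's exact prompts):

```python
from typing import Callable, List, Optional

def build_evaluation_instruction(
    target_interventions: List[str],
    upstream_to_ignore: Optional[List[str]] = None,
) -> str:
    """Compose the evaluation instruction of Eq. (2); wording is illustrative."""
    instruction = (
        "Assess how well the frame realises the target interventions: "
        + ", ".join(target_interventions)
        + ". Suggest improvements to the editing prompt."
    )
    if upstream_to_ignore:  # causal decoupling: simulate graph mutilation
        instruction += (
            " Ignore the following upstream causal-graph variables: "
            + ", ".join(upstream_to_ignore) + "."
        )
    return instruction

def multimodal_loss(
    vlm: Callable[[object, str, str], str],
    frame: object,
    instruction: str,
    prompt: str,
) -> str:
    """L = VLM(V'_frame, evaluation instruction, P); the 'loss' is textual feedback."""
    return vlm(frame, instruction, prompt)

# Toy VLM stand-in so the sketch runs end-to-end.
toy_vlm = lambda frame, instruction, prompt: f"Criticism of '{prompt}': beard still missing."
instr = build_evaluation_instruction(["add a beard"], upstream_to_ignore=["gender"])
loss = multimodal_loss(toy_vlm, "frame0", instr, "a bearded woman")
```

Note that, unlike a scalar loss, the returned value is free-form text, which is what the textual optimizer consumes.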
In this way, we simulate causal graph mutilation (55), which allows us to causally steer generation toward OOD counterfactuals. To optimize $\mathcal{P}$, we employ Textual Gradient Descent (TGD) (97), which directly updates the prompt:

$$
\mathcal{P}' = \mathrm{TGD.step}\left(\mathcal{P}, \frac{\partial \mathcal{L}}{\partial \mathcal{P}}\right) \triangleq LLM\left(\textit{Below are the criticisms on } \{\mathcal{P}\}\colon \left\{\frac{\partial \mathcal{L}}{\partial \mathcal{P}}\right\} \textit{ Incorporate the criticisms, and produce a new prompt.}\right)
$$

where $\frac{\partial \mathcal{L}}{\partial \mathcal{P}}$ denotes the "textual gradients" (97), passed through an LLM at each TGD update to generate a new prompt incorporating the VLM criticisms. Optimization halts when the target interventions are met or the maximum number of iterations is reached. A summary of the proposed approach is showcased in Algorithm 1.

# Algorithm 1

Require: Counterfactual prompt $\mathcal{P}$, Factual Video $\mathcal{V}$, DiffusionVideoEditor, VLM
1: prompt $\gets \mathcal{P}$ $\triangleright$ Initialize prompt
2: optimizer $\gets$ TGD(parameters = [prompt]) $\triangleright$ Set up textual optimizer
3: for iter in maxIters do
4: $\mathcal{V}' \gets$ DiffusionVideoEditor($\mathcal{V}$, prompt) $\triangleright$ Counterfactual generation, Eq. (1)
5: loss $\gets$ VLM($\mathcal{V}'_{frame}$, evaluation instruction, prompt) $\triangleright$ VLM evaluation, Eq. (2)
6: if "no optimization is needed" $\in$ loss.value then
7: break
8: end if
9: loss.backward() $\triangleright$ Computation of $\frac{\partial \mathcal{L}}{\partial \mathcal{P}}$
10: optimizer.step() $\triangleright$ Update prompt via TGD, Eq. (3)
11: end for
12: return Final Counterfactual Video $\mathcal{V}'$

# 3.3 VLMs for assessing causal effectiveness

Effectiveness is key in counterfactual generation, indicating whether the target intervention succeeded (12; 51; 49). CLIP-based metrics (63) lack interpretability and are inefficient at capturing causal alignment between text and image. Following (22), we use a VLM to assess effectiveness across a set of generated counterfactual videos with a visual question answering (VQA) approach. Given triplets $\{Q_i^a, C_i, \mathcal{V}'_{frame_i}\}_{i=1}^N$, where $Q_i^a$ is a multiple-choice question about the intervened attribute $a$, $C_i$ is the correct answer extracted from the target counterfactual prompt, and $\mathcal{V}'_{frame_i}$ is a generated counterfactual video frame, we measure effectiveness by the accuracy of the VLM's answers:

$$
Effectiveness(a) = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\left[ VLM(\mathcal{V}'_{frame_i}, Q_i^a) = C_i \right].
$$

# 3.4 VLMs for assessing minimality

Minimal interventions (70; 68; 49) are considered a principal property of visual counterfactuals. In counterfactual generation, a substantial challenge lies in incorporating the desired interventions (edits) while leaving unmodified other visual factors of variation that are not related to the assumed causal graph (51) – a challenge closely tied to identity preservation of the (factual) observation (65). We evaluate counterfactual minimality in the text domain, offering a more interpretable alternative to conventional image-space metrics (99). Specifically, we prompt a VLM to describe in detail both factual and counterfactual frames, excluding attributes associated with the assumed causal graph.
We then embed the resulting descriptions using a BERT-based sentence transformer (85) and compute their cosine similarity in the semantic space. The overall metric can be expressed as follows:

$\mathcal{P}_{min} =$ "Describe this frame in detail, exclude causal graph variables"

$$
Minimality(\mathcal{V}_{frame}, \mathcal{V}'_{frame}) = \cos\big(\tau_\phi(VLM(\mathcal{V}_{frame}, \mathcal{P}_{min})), \tau_\phi(VLM(\mathcal{V}'_{frame}, \mathcal{P}_{min}))\big), \quad (5)
$$

where $\tau_\phi(\cdot)$ denotes the semantic encoder and $\mathcal{V}_{frame}, \mathcal{V}'_{frame}$ the factual and counterfactual frames.

# 4 Experiments and Results

# 4.1 Experimental Setup

Evaluation Dataset. In line with standard evaluation protocols in video editing (89; 15; 7; 43; 61; 37), we curated an evaluation dataset consisting of 67 text-video pairs sourced from the large-scale facial text-video dataset CelebV-Text (94), which contains in-the-wild video clips with diverse visual content. For each video, we extracted the first 24 frames, resizing them to a resolution of $512 \times 512$. We assume that the data-generating process of our evaluation dataset is adequately described by the causal graph shown in Figure 3 (49; 34). Additionally, we generated four edited prompts per video, corresponding to interventions on the attributes "age", "gender", "beard", and "baldness". These counterfactual prompts were either contributed by the authors or generated by ChatGPT, based on the causal relationships defined by the assumed causal graph. For each edited prompt, we constructed four multiple-choice questions, each targeting a variable from the causal graph. These questions serve to assess causal effectiveness as outlined in Section 3.3, utilizing the VLM.

Figure 3: CelebV-Text causal graph.
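The effectiveness (Eq. 4) and minimality (Eq. 5) computations used in this evaluation reduce to a few lines once the VLM and the sentence encoder $\tau_\phi$ are abstracted as callables; a sketch with toy stand-ins (all names are illustrative, not from any released code):

```python
import math
from typing import Callable, List, Sequence, Tuple

def effectiveness(
    vlm_answer: Callable[[object, str], str],
    triplets: List[Tuple[str, str, object]],  # (question Q_i, answer C_i, frame V'_i)
) -> float:
    """Eq. (4): accuracy of the VLM on multiple-choice questions about the edit."""
    hits = sum(vlm_answer(frame, q) == c for q, c, frame in triplets)
    return hits / len(triplets)

def _cosine(u: Sequence[float], v: Sequence[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def minimality(
    describe: Callable[[object], str],        # VLM(., P_min), causal variables excluded
    embed: Callable[[str], Sequence[float]],  # sentence encoder tau_phi
    factual: object,
    counterfactual: object,
) -> float:
    """Eq. (5): cosine similarity of the two frames' description embeddings."""
    return _cosine(embed(describe(factual)), embed(describe(counterfactual)))

# Toy stand-ins so the sketch runs end-to-end.
answers = {("f1", "Q_beard"): "yes", ("f2", "Q_beard"): "no"}
acc = effectiveness(lambda fr, q: answers[(fr, q)],
                    [("Q_beard", "yes", "f1"), ("Q_beard", "yes", "f2")])
sim = minimality(lambda fr: "a person outdoors" if fr == "f1" else "a person indoors",
                 lambda s: [1.0, 0.0] if "outdoors" in s else [0.0, 1.0],
                 "f1", "f2")
```

In practice, `describe` and `embed` would be backed by a VLM and a sentence-transformer model, respectively; the metric logic itself is model-agnostic.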
Further details about the evaluation dataset can be found in the Appendix.

Implementation Details. We demonstrate our approach with 3 different state-of-the-art diffusion-based video editing systems that adapt text-to-image (T2I) LDMs for video editing. FLATTEN (7) introduces optical flow-guided attention modules that incorporate latent diffusion features that share the same flow across frames. Tune-A-Video (89) fine-tunes a spatio-temporal attention module on a single text-video pair. Both methods extend 2D T2I LDMs to the video domain by inflating their architectures. In contrast, TokenFlow (15) first applies an image editing method to a set of keyframes and then propagates the edits to the remaining frames to ensure temporal consistency. We do not include methods based on cross-attention guided editing, such as Video-P2P (43) or FateZero (61), as these approaches require identical structure between the source and edited prompts. For a fair comparison, we use Stable Diffusion v2.1 (66) as the backbone model for all three methods. We adopt DDIM sampling with 50 steps and a classifier-free guidance (21) scale of 4.5 for Tune-A-Video and TokenFlow, and 7.5 for FLATTEN. To implement the VLM counterfactual loss (Equation 2), we utilize the multimodal GPT-4o model and employ TextGrad (97) to propagate textual gradients for 2 TGD iterations. For the VLM effectiveness metric (Equation 4) we use LLaVA-NeXT (42), and GPT-4o (2) for the VLM minimality metric (Equation 5). GPT-4o is capable of generating descriptions that effectively exclude references to causal graph variables, making it more suitable for implementing our minimality metric. All experiments were conducted on a single A100 GPU.

# 4.2 Results

# 4.2.1 Quantitative Evaluation

We thoroughly assess the generated counterfactual videos utilizing metrics that capture key axiomatic properties of counterfactuals (12; 17).
From a causal perspective, we evaluate the produced video counterfactuals in terms of effectiveness (51; 49) and minimality (49; 68). Moreover, we employ general video quality metrics, including DOVER (88) and FVD (78), to assess visual fidelity and realism, as well as the CLIP (63) score between adjacent frames to evaluate temporal consistency. We compare our method with both vanilla video editing methods, which use the initial counterfactual prompts from our evaluation dataset, and a naive LLM-based paraphrasing baseline, in which an LLM takes the initial target counterfactual prompt as input and is instructed to rephrase it using more expressive language without having access to the counterfactual video. In addition, we report results for our approach both with and without augmenting the VLM evaluation instruction with the causal decoupling prompt (VLM loss w/o causal dec, VLM loss w/ causal dec), as described in Section 3.2.

Comparison of vanilla editing systems. From Table 1, and specifically looking at the initial prompt rows, we observe that TokenFlow achieves the best balance between causal effectiveness and minimality compared to the other two baseline methods. Tune-A-Video generates reasonably effective counterfactuals but struggles with minimality, showing the worst values across all three frameworks in both the LPIPS and VLM-based minimality metrics. Regarding overall video quality and temporal consistency, TokenFlow and FLATTEN outperform Tune-A-Video, indicating superior performance in maintaining visual coherence.

Effectiveness. To quantify counterfactual effectiveness, we leverage VLMs by prompting them with multiple-choice questions regarding the intervened variables in the causal graph, as described in Section 4.1. Each initial counterfactual (target) prompt is associated with four questions (age, gender, beard, bald).
Table 1 presents the VLM accuracy on all multiple-choice questions related to each causal graph variable on the evaluation dataset, under interventions on age, gender, beard, and bald. We observe that our method improves counterfactual effectiveness across all baseline editing frameworks. The proposed VLM-based counterfactual loss effectively guides the generation process by encouraging the latent diffusion backbone model to focus on the unsuccessful interventions. Specifically, the best scores are obtained when the evaluation instruction provided to the VLM is augmented with the causal decoupling prompt (VLM loss w/ causal dec), indicating that the generation is effectively steered toward counterfactual videos that break strong causal relationships (e.g., adding a beard or baldness to females). Comparing our approach to the LLM paraphrasing scenario shows that, while naive LLM-based rephrasing can improve the performance of vanilla FLATTEN and TokenFlow on gender interventions, in all other cases the LLM-generated prompts fail to effectively guide the generation process. This may be attributed to hallucinations or irrelevant content introduced during paraphrasing, which the diffusion backbone is unable to handle appropriately.

Minimality. To measure the minimality of the interventions, we utilize LPIPS (99) and the VLM-based metric described in Section 3.4. Our method reveals the interplay between preserving proximity to the factual video and adhering to the counterfactual text conditioning. From Table 1, we observe that the LPIPS metric tends to increase as the counterfactual edits become more effective. When measuring minimality using the proposed VLM-based metric, we observe a similar trend, as the cosine similarity of the transformer semantic embeddings decreases slightly. Nevertheless, the deviations from the baseline frameworks remain marginal.
Overall, we can conclude that our approach is capable of achieving minimality scores comparable to those of baseline methods, thereby ensuring a reasonable balance between effectiveness and minimality. Video Quality and Temporal Consistency. Table 1 presents quantitative results for general video quality (DOVER, FVD) and temporal consistency (CLIP (63) score). We observe that DOVER score (88), which assesses generated counterfactual videos from both technical and aesthetic perspectives, shows only marginal differences between the baseline methods and our VLM-steering approach. In addition, the FVD (78) score shows a slight increase for our proposed method, indicating that as the counterfactuals become more effective, they deviate more noticeably from the observational distribution. Lastly, the deviations in CLIP scores for temporal consistency are minimal compared to the vanilla methods. Overall, we conclude that the measurements of general video quality and temporal consistency metrics indicate no significant deviation from the baselines, demonstrating that our method improves counterfactual effectiveness without compromising other critical factors such as video realism and temporal coherence. Table 1: Counterfactual Evaluation: Effectiveness, Minimality, Video Quality & Temporal Consistency. # 4.2.2 Qualitative Evaluation Figures 4 and 5 present qualitative results of our method across all three video editing systems: FLATTEN (7), Tune-A-Video (89), and TokenFlow (15). The first row depicts the factual video, while the remaining rows show counterfactuals produced using the initial prompt, the LLM-paraphrased prompt, and our causally optimized prompt. We observe that our approach effectively generates counterfactual videos that faithfully incorporate the desired interventions. 
Specifically, we can derive a broad range of counterfactuals – from edits that break strong causal relationships (e.g., adding a beard to a woman in Figures 4, 5) to age transformations (e.g., making an older man appear as a child in Figure 5) and gender transformations (e.g., making a man appear as a woman in Figure 5). Furthermore, the results highlight the superiority of our proposed VLM causal steering compared to naive prompt paraphrasing by an LLM. Figure 6 illustrates the progression of the textual optimization process in the proposed method. We present results of our approach with the FLATTEN framework, where the influence of textual gradient steps (2nd row) is particularly evident. Through controllable iterative refinements, the generation is gradually guided toward the intended intervention (a youthful appearance). The results demonstrate that our method effectively produces the desired counterfactual transformation, highlighting the controllability of our causal steering approach.

# 5 Discussion and Limitations

We introduced a novel framework for causally steering text-guided diffusion-based video editing systems toward generating causally faithful video counterfactuals. Grounded in the insight that causal counterfactuals lie within the latent space of diffusion models (66), our approach leverages textual evaluative feedback from a vision-language model to iteratively refine the input prompt. Our optimization strategy provides a principled method for guiding counterfactual generation, significantly improving causal alignment without compromising visual realism, minimality, or temporal coherence. Our results demonstrate the effectiveness and controllability of the proposed method, highlighting its potential to advance causal reasoning capabilities in diffusion-based generative models.

Limitations. We do not add any particular loss to enforce temporal consistency beyond what each baseline method does.
It is quite possible that static interventions on the attributes could alter temporal consistency, but we have not observed this in our experiments. In video editing, the ability to manipulate temporal attributes such as actions or dynamic scenes is crucial. Constructing causal graphs and datasets that cover such temporal attributes is necessary to develop and test such methods; this is left for future work.

Broader Impact. Our method for generating causally faithful video counterfactuals enhances video synthesis, interpretable AI, and content manipulation by providing better controllable edits. This could improve automated content generation in fields like healthcare (e.g., simulating treatment outcomes or disease progression under varied causal conditions), education (e.g., allowing students to observe video counterfactuals of complex processes, such as surgical procedures or engineering designs), and digital media (e.g., enabling creative content manipulation). Furthermore, it can potentially address ethical concerns by enabling thorough evaluation of the misuse of deepfake technologies, highlighting the need for responsible guidelines and safeguards.

Figure 4: Qualitative results: Generated counterfactual videos illustrate the positive effect of our VLM-based causal steering (bottom row) when applied to recent video editing systems (FLATTEN (7), Tune-A-Video (89), and TokenFlow (15)). First panel: intervention on beard (adding a beard to a woman). Second panel: intervention on beard (removing a beard from a man). Third panel: intervention on age (aging a woman).

Figure 5: Qualitative results (continuation of Figure 4): First panel: intervention on beard (adding a beard to a woman). Second panel: intervention on age (making an older man appear young). Third panel: intervention on gender (transforming a man into a woman). The accuracy of the edits in the bottom row highlights the effectiveness of our proposed method.
Figure 6: Progressive counterfactual transformation of an elderly woman into a young woman (top row) through two iterative Textual Gradient Descent (TGD) steps (97) in the bottom row produced by our proposed causal steering with the FLATTEN (7) editing method. # Acknowledgments and Disclosure of Funding This work has been partially supported by project MIS 5154714 of the National Recovery and Resilience Plan Greece 2.0 funded by the European Union under the NextGenerationEU Program. S.A. Tsaftaris acknowledges support from the Royal Academy of Engineering and the Research Chairs and Senior Research Fellowships scheme (grant RCSRF1819/8/25), and the UK’s Engineering and Physical Sciences Research Council (EPSRC) support via grant EP/X017680/1, and the UKRI AI programme and EPSRC, for CHAI - EPSRC AI Hub for Causality in Healthcare AI with Real Data [grant number EP/Y028856/1]. Hardware resources were granted with the support of GRNET. References [1] Ahmed Abdulaal, Daniel C Castro, and Daniel C Alexander. Deep structural causal modelling of the clinical and radiological phenotype of alzheimer’s disease. In NeurIPS 2022 workshop on causality for real-world impact, 2022. [2] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. [3] Maximilian Augustin, Valentyn Boreiko, Francesco Croce, and Matthias Hein. Diffusion visual counterfactual explanations. Advances in Neural Information Processing Systems, 35: 364–377, 2022. [4] Weifeng Chen, Yatai Ji, Jie Wu, Hefeng Wu, Pan Xie, Jiashi Li, Xin Xia, Xuefeng Xiao, and Liang Lin. Control-a-video: Controllable text-to-video generation with diffusion models. arXiv e-prints, pages arXiv–2305, 2023. [5] Jiale Cheng, Ruiliang Lyu, Xiaotao Gu, Xiao Liu, Jiazheng Xu, Yida Lu, Jiayan Teng, Zhuoyi Yang, Yuxiao Dong, Jie Tang, et al. 
Vpo: Aligning text-to-video generation models with prompt optimization. arXiv preprint arXiv:2503.20491, 2025. [6] Ernie Chu, Tzuhsuan Huang, Shuo-Yen Lin, and Jun-Cheng Chen. Medm: Mediating image diffusion models for video-to-video translation with temporal correspondence guidance. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 1353–1361, 2024. [7] Yuren Cong, Mengmeng Xu, Christian Simon, Shoufa Chen, Jiawei Ren, Yanping Xie, Juan-Manuel Perez-Rua, Bodo Rosenhahn, Tao Xiang, and Sen He. FLATTEN: optical FLow-guided ATTENtion for consistent text-to-video editing. In The Twelfth International Conference on Learning Representations, 2024. [8] Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, and Mubarak Shah. Diffusion models in vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(9):10850–10869, 2023. [9] Saloni Dash, Vineeth N Balasubramanian, and Amit Sharma. Evaluating and mitigating bias in image classifiers: A causal perspective using counterfactuals. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 915–924, 2022. [10] Ruoyu Feng, Wenming Weng, Yanhui Wang, Yuhui Yuan, Jianmin Bao, Chong Luo, Zhibo Chen, and Baining Guo. Ccedit: Creative and controllable video editing via diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6712–6722, 2024. [11] Alessandro Fontanella, Grant Mair, Joanna Wardlaw, Emanuele Trucco, and Amos Storkey. Diffusion models for counterfactual generation and anomaly detection in brain images. IEEE Transactions on Medical Imaging, 2024. [12] David Galles and Judea Pearl. An axiomatic characterization of causal counterfactuals. Foundations of Science, 3:151–182, 1998. [13] Bingjie Gao, Xinyu Gao, Xiaoxue Wu, Yujie Zhou, Yu Qiao, Li Niu, Xinyuan Chen, and Yaohui Wang. The devil is in the prompts: Retrieval-augmented prompt optimization for text-to-video generation.
arXiv preprint arXiv:2504.11739, 2025. [14] Tianyu Gao, Adam Fisch, and Danqi Chen. Making pre-trained language models better fewshot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816–3830, 2021. [15] Michal Geyer, Omer Bar-Tal, Shai Bagon, and Tali Dekel. Tokenflow: Consistent diffusion features for consistent video editing. In The Twelfth International Conference on Learning Representations, 2024. [16] Yuchao Gu, Yipin Zhou, Bichen Wu, Licheng Yu, Jia-Wei Liu, Rui Zhao, Jay Zhangjie Wu, David Junhao Zhang, Mike Zheng Shou, and Kevin Tang. Videoswap: Customized video subject swapping with interactive semantic point correspondence. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7621–7630, 2024. [17] Joseph Y Halpern. Axiomatizing causal reasoning. Journal of Artificial Intelligence Research, 12:317–337, 2000. [18] Yaru Hao, Zewen Chi, Li Dong, and Furu Wei. Optimizing prompts for text-to-image generation. Advances in Neural Information Processing Systems, 36:66923–66939, 2023. [19] Adi Haviv, Jonathan Berant, and Amir Globerson. Bertese: Learning to speak to bert. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3618–3623, 2021. [20] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. [21] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications, 2021. [22] Yushi Hu, Benlin Liu, Jungo Kasai, Yizhong Wang, Mari Ostendorf, Ranjay Krishna, and Noah A Smith. 
Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20406–20417, 2023. [23] Zhihao Hu and Dong Xu. Videocontrolnet: A motion-guided video-to-video translation framework by using diffusion model with controlnet. arXiv preprint arXiv:2307.14073, 2023. [24] Yi Huang, Jiancheng Huang, Yifan Liu, Mingfu Yan, Jiaxi Lv, Jianzhuang Liu, Wei Xiong, He Zhang, Liangliang Cao, and Shifeng Chen. Diffusion model-based image editing: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025. [25] Ziqi Huang, Yinan He, Jiashuo Yu, Fan Zhang, Chenyang Si, Yuming Jiang, Yuanhan Zhang, Tianxing Wu, Qingyang Jin, Nattapol Chanpaisit, et al. Vbench: Comprehensive benchmark suite for video generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21807–21818, 2024. [26] Guillaume Jeanneret, Loïc Simon, and Frédéric Jurie. Diffusion models for counterfactual explanations. In Proceedings of the Asian conference on computer vision, pages 858–876, 2022. [27] Guillaume Jeanneret, Loïc Simon, and Frédéric Jurie. Adversarial counterfactual visual explanations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16425–16435, 2023. [28] Hyeonho Jeong and Jong Chul Ye. Ground-a-video: Zero-shot grounded video editing using text-to-image diffusion models. In The Twelfth International Conference on Learning Representations, 2024. [29] Pengliang Ji, Chuyang Xiao, Huilin Tai, and Mingxiao Huo. T2vbench: Benchmarking temporal dynamics for text-to-video generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5325–5335, 2024. [30] Yatai Ji, Jiacheng Zhang, Jie Wu, Shilong Zhang, Shoufa Chen, Chongjian GE, Peize Sun, Weifeng Chen, Wenqi Shao, Xuefeng Xiao, et al. 
Prompt-a-video: Prompt your video diffusion model via preference-aligned llm. arXiv preprint arXiv:2412.15156, 2024. [31] Ozgur Kara, Bariscan Kurtkaya, Hidir Yesiltepe, James M Rehg, and Pinar Yanardag. Rave: Randomized noise shuffling for fast and consistent video editing with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6507–6516, 2024. [32] Anant Khandelwal. Infusion: Inject and attention fusion for multi concept zero-shot text-based video editing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3017–3026, 2023. [33] Diederik P Kingma, Max Welling, et al. Auto-encoding variational bayes, 2013. [34] Klaus-Rudolf Kladny, Julius von Kügelgen, Bernhard Schölkopf, and Michael Muehlebach. Deep backtracking counterfactuals for causally compliant explanations. arXiv preprint arXiv:2310.07665, 2023. [35] Murat Kocaoglu, Christopher Snyder, Alexandros G Dimakis, and Sriram Vishwanath. Causalgan: Learning causal implicit generative models with adversarial training. In International Conference on Learning Representations, 2018. [36] Jari Korhonen and Junyong You. Peak signal-to-noise ratio revisited: Is simple beautiful? In 2012 Fourth international workshop on quality of multimedia experience, pages 37–38. IEEE, 2012. [37] Max Ku, Cong Wei, Weiming Ren, Huan Yang, and Wenhu Chen. Anyv2v: A tuning-free framework for any video-to-video editing tasks. Transactions on Machine Learning Research, 2024. ISSN 2835-8856. [38] Mingi Kwon, Jaeseok Jeong, and Youngjung Uh. Diffusion models already have a semantic latent space. arXiv preprint arXiv:2210.10960, 2022. [39] Chengen Lai, Shengli Song, Sitong Yan, and Guangneng Hu. Improving vision and language concepts understanding with multimodal counterfactual samples. In European Conference on Computer Vision, pages 174–191. Springer, 2024. [40] Wei-Sheng Lai, Jia-Bin Huang, Oliver Wang, Eli Shechtman, Ersin Yumer, and Ming-Hsuan Yang. 
Learning blind video temporal consistency. In Proceedings of the European conference on computer vision (ECCV), pages 170–185, 2018. [41] Tiep Le, Vasudev Lal, and Phillip Howard. Coco-counterfactuals: Automatically constructed counterfactual examples for image-text pairs. Advances in Neural Information Processing Systems, 36:71195–71221, 2023. [42] Bo Li, Kaichen Zhang, Hao Zhang, Dong Guo, Renrui Zhang, Feng Li, Yuanhan Zhang, Ziwei Liu, and Chunyuan Li. Llava-next: Stronger llms supercharge multimodal capabilities in the wild. 2024. [43] Shaoteng Liu, Yuechen Zhang, Wenbo Li, Zhe Lin, and Jiaya Jia. Video-p2p: Video editing with cross-attention control. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8599–8608, 2024. [44] Vivian Liu and Lydia B Chilton. Design guidelines for prompt engineering text-to-image generative models. In Proceedings of the 2022 CHI conference on human factors in computing systems, pages 1–23, 2022. [45] Yaofang Liu, Xiaodong Cun, Xuebo Liu, Xintao Wang, Yong Zhang, Haoxin Chen, Yang Liu, Tieyong Zeng, Raymond Chan, and Ying Shan. Evalcrafter: Benchmarking and evaluating large video generation models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22139–22149, 2024. [46] Yuanxin Liu, Lei Li, Shuhuai Ren, Rundong Gao, Shicheng Li, Sishuo Chen, Xu Sun, and Lu Hou. Fetv: A benchmark for fine-grained evaluation of open-domain text-to-video generation. Advances in Neural Information Processing Systems, 36:62352–62387, 2023. [47] Yue Ma, Yingqing He, Xiaodong Cun, Xintao Wang, Siran Chen, Xiu Li, and Qifeng Chen. Follow your pose: Pose-guided text-to-video generation using pose-free videos. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 4117–4125, 2024. [48] Oscar Mañas, Pietro Astolfi, Melissa Hall, Candace Ross, Jack Urbanek, Adina Williams, Aishwarya Agrawal, Adriana Romero-Soriano, and Michal Drozdzal. 
Improving text-to-image consistency via automatic prompt optimization. Transactions on Machine Learning Research, 2024. ISSN 2835-8856. [49] Thomas Melistas, Nikos Spyrou, Nefeli Gkouti, Pedro Sanchez, Athanasios Vlontzos, Yannis Panagakis, Giorgos Papanastasiou, and Sotirios Tsaftaris. Benchmarking counterfactual image generation. Advances in Neural Information Processing Systems, 37:133207–133230, 2024. [50] Wenyi Mo, Tianyu Zhang, Yalong Bai, Bing Su, Ji-Rong Wen, and Qing Yang. Dynamic prompt optimizing for text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26627–26636, 2024. [51] Miguel Monteiro, Fabio De Sousa Ribeiro, Nick Pawlowski, Daniel C Castro, and Ben Glocker. Measuring axiomatic soundness of counterfactual image models. In The Eleventh International Conference on Learning Representations, 2023. [52] Giorgos Papanastasiou, Pedro P Sanchez, Argyrios Christodoulidis, Guang Yang, and Walter Hugo Lopez Pinaya. Confounder-aware foundation modeling for accurate phenotype profiling in cell imaging. bioRxiv, pages 2024–12, 2024. [53] Yong-Hyun Park, Mingi Kwon, Jaewoong Choi, Junghyo Jo, and Youngjung Uh. Understanding the latent space of diffusion models through the lens of riemannian geometry. Advances in Neural Information Processing Systems, 36:24129–24142, 2023. [54] Nick Pawlowski, Daniel Coelho de Castro, and Ben Glocker. Deep structural causal models for tractable counterfactual inference. Advances in neural information processing systems, 33: 857–869, 2020. [55] Judea Pearl. Causality. Cambridge university press, 2009. [56] Paraskevas Pegios, Aasa Feragen, Andreas Abildtrup Hansen, and Georgios Arvanitidis. Counterfactual explanations via riemannian latent space traversal. arXiv preprint arXiv:2411.02259, 2024. [57] Paraskevas Pegios, Manxi Lin, Nina Weng, Morten Bo Søndergaard Svendsen, Zahra Bashir, Siavash Bigdeli, Anders Nymark Christensen, Martin Tolsgaard, and Aasa Feragen. 
Diffusion-based iterative counterfactual explanations for fetal ultrasound image quality assessment. arXiv preprint arXiv:2403.08700, 2024. [58] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023. [59] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. SDXL: Improving latent diffusion models for high-resolution image synthesis. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=di52zR8xgf. [60] Viraj Prabhu, Sriram Yenamandra, Prithvijit Chattopadhyay, and Judy Hoffman. Lance: Stress-testing visual models by generating language-guided counterfactual images. Advances in Neural Information Processing Systems, 36:25165–25184, 2023. [61] Chenyang Qi, Xiaodong Cun, Yong Zhang, Chenyang Lei, Xintao Wang, Ying Shan, and Qifeng Chen. Fatezero: Fusing attentions for zero-shot text-based video editing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15932–15942, 2023. [62] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR, 2021. [63] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR, 2021. [64] Hadrien Reynaud, Athanasios Vlontzos, Mischa Dombrowski, Ciarán Gilligan Lee, Arian Beqiri, Paul Leeson, and Bernhard Kainz.
D’artagnan: Counterfactual video generation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 599–609. Springer, 2022. [65] Fabio De Sousa Ribeiro, Tian Xia, Miguel Monteiro, Nick Pawlowski, and Ben Glocker. High fidelity image counterfactuals with probabilistic causal models. In International Conference on Machine Learning, pages 7390–7425. PMLR, 2023. [66] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695, 2022. [67] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical image computing and computer-assisted intervention–MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18, pages 234–241. Springer, 2015. [68] Pedro Sanchez and Sotirios A Tsaftaris. Diffusion causal models for counterfactual estimation. In Conference on Causal Learning and Reasoning, pages 647–668. PMLR, 2022. [69] Pedro Sanchez, Antanas Kascenas, Xiao Liu, Alison Q O’Neil, and Sotirios A Tsaftaris. What is healthy? generative counterfactual diffusion for lesion localization. In MICCAI Workshop on Deep Generative Models, pages 34–44. Springer, 2022. [70] Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio. Toward causal representation learning. Proceedings of the IEEE, 109(5):612–634, 2021. [71] Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio. Toward causal representation learning. Proceedings of the IEEE, 109(5):612–634, 2021. [72] Chaehun Shin, Heeseung Kim, Che Hyun Lee, Sang-gil Lee, and Sungroh Yoon. Edit-a-video: Single video editing with object-aware consistency. 
In Asian Conference on Machine Learning, pages 1215–1230. PMLR, 2024. [73] Bartlomiej Sobieski, Jakub Grzywaczewski, Bartłomiej Sadlej, Matthew Tivnan, and Przemyslaw Biecek. Rethinking visual counterfactual explanations through region constraint. In The Thirteenth International Conference on Learning Representations, 2025. [74] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021. [75] Xue Song, Jiequan Cui, Hanwang Zhang, Jingjing Chen, Richang Hong, and Yu-Gang Jiang. Doubly abductive counterfactual inference for text-based image editing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9162–9171, 2024. [76] Wenhao Sun, Rong-Cheng Tu, Jingyi Liao, and Dacheng Tao. Diffusion model-based video editing: A survey. arXiv preprint arXiv:2407.07111, 2024. [77] Luming Tang, Menglin Jia, Qianqian Wang, Cheng Perng Phoo, and Bharath Hariharan. Emergent correspondence from image diffusion. Advances in Neural Information Processing Systems, 36:1363–1389, 2023. [78] Thomas Unterthiner, Sjoerd Van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. arXiv preprint arXiv:1812.01717, 2018. [79] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017. [80] Sahil Verma, Varich Boonsanong, Minh Hoang, Keegan Hines, John Dickerson, and Chirag Shah. Counterfactual explanations and algorithmic recourses for machine learning: A review. ACM Computing Surveys, 56(12):1–42, 2024. [81] Athanasios Vlontzos, Daniel Rueckert, Bernhard Kainz, et al. A review of causality for learning algorithms in medical image analysis. Machine Learning for Biomedical Imaging, 1 (November 2022 issue):1–17, 2022. [82] Athanasios Vlontzos, Bernhard Kainz, and Ciarán M Gilligan-Lee. 
Estimating categorical counterfactuals via deep twin networks. Nature Machine Intelligence, 5(2):159–168, 2023. [83] Athanasios Vlontzos, Christine Müller, and Bernhard Kainz. Chapter 17 - causal reasoning in medical imaging. In Marco Lorenzi and Maria A. Zuluaga, editors, Trustworthy AI in Medical Imaging, The MICCAI Society book Series, pages 367–381. Academic Press, 2025. ISBN 978-0-443-23761-4. doi: https://doi.org/10.1016/B978-0-44-323761-4.00029-8. URL https://www.sciencedirect.com/science/article/pii/B9780443237614000298. [84] Sandra Wachter, Brent Mittelstadt, and Chris Russell. Counterfactual explanations without opening the black box: Automated decisions and the gdpr. Harv. JL & Tech., 31:841, 2017. [85] Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. Advances in neural information processing systems, 33:5776–5788, 2020. [86] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4): 600–612, 2004. [87] Nina Weng, Paraskevas Pegios, Eike Petersen, Aasa Feragen, and Siavash Bigdeli. Fast diffusion-based counterfactuals for shortcut removal and generation. In European Conference on Computer Vision, pages 338–357. Springer, 2024. [88] Haoning Wu, Erli Zhang, Liang Liao, Chaofeng Chen, Jingwen Hou, Annan Wang, Wenxiu Sun, Qiong Yan, and Weisi Lin. Exploring video quality assessment on user generated contents from aesthetic and technical perspectives. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20144–20154, 2023. [89] Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Stan Weixian Lei, Yuchao Gu, Yufei Shi, Wynne Hsu, Ying Shan, Xiaohu Qie, and Mike Zheng Shou. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation. 
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7623–7633, 2023. [90] Tian Xia, Agisilaos Chartsias, Chengjia Wang, Sotirios A Tsaftaris, Alzheimer’s Disease Neuroimaging Initiative, et al. Learning to synthesise the ageing brain without longitudinal data. Medical Image Analysis, 73:102169, 2021. [91] Shuai Yang, Yifan Zhou, Ziwei Liu, and Chen Change Loy. Rerender a video: Zero-shot text-guided video-to-video translation. In SIGGRAPH Asia 2023 Conference Papers, pages 1–11, 2023. [92] Shuai Yang, Yifan Zhou, Ziwei Liu, and Chen Change Loy. Fresco: Spatial-temporal correspondence for zero-shot video translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8703–8712, 2024. [93] Xiangpeng Yang, Linchao Zhu, Hehe Fan, and Yi Yang. Videograin: Modulating space-time attention for multi-grained video editing. In The Thirteenth International Conference on Learning Representations, 2025. [94] Jianhui Yu, Hao Zhu, Liming Jiang, Chen Change Loy, Weidong Cai, and Wayne Wu. Celebvtext: A large-scale facial text-video dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14805–14814, 2023. [95] Zhihan Yu and Ruifan Li. Revisiting counterfactual problems in referring expression comprehension. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13438–13448, 2024. [96] Shenghai Yuan, Jinfa Huang, Yongqi Xu, Yaoyang Liu, Shaofeng Zhang, Yujun Shi, Rui-Jie Zhu, Xinhua Cheng, Jiebo Luo, and Li Yuan. Chronomagic-bench: A benchmark for metamorphic evaluation of text-to-time-lapse video generation. Advances in Neural Information Processing Systems, 37:21236–21270, 2024. [97] Mert Yuksekgonul, Federico Bianchi, Joseph Boen, Sheng Liu, Pan Lu, Zhi Huang, Carlos Guestrin, and James Zou. Optimizing generative ai by backpropagating language model feedback. Nature, 639(8055):609–616, 2025. 
[98] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586–595, 2018. [99] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586–595, 2018. [100] Yifeng Zhang, Ming Jiang, and Qi Zhao. Learning chain of counterfactual thought for bias-robust vision-language reasoning. In European Conference on Computer Vision, pages 334–351. Springer, 2024. [101] Zicheng Zhang, Bonan Li, Xuecheng Nie, Congying Han, Tiande Guo, and Luoqi Liu. Towards consistent video editing with text-to-image diffusion models. Advances in Neural Information Processing Systems, 36:58508–58519, 2023. [102] Rui Zhao, Yuchao Gu, Jay Zhangjie Wu, David Junhao Zhang, Jia-Wei Liu, Weijia Wu, Jussi Keppo, and Mike Zheng Shou. Motiondirector: Motion customization of text-to-video diffusion models. In European Conference on Computer Vision, pages 273–290. Springer, 2024.

# A Summary

The appendix includes a description of the dataset construction process (Appendix B) and the detailed structure of the evaluation instruction used in the VLM-based loss (Appendix D). We also outline the optimization procedure for improving prompt quality using textual feedback and textual gradients $\big(\frac{\partial \mathcal{L}}{\partial \mathcal{P}}\big)$ provided by the VLM-based loss (Appendix E). Furthermore, we present the VLM evaluation pipelines used to assess effectiveness and minimality (Appendix F). Lastly, we include qualitative results (Appendix G) that further demonstrate the capabilities of our proposed approach.
# B Evaluation Dataset

We curated an evaluation dataset consisting of 67 text-video pairs sourced from the large-scale facial text–video dataset CelebV-Text (94). We extracted the first 24 frames from each video and resized them to a resolution of $512 \times 512$. Each video in CelebV-Text is associated with a text prompt describing static appearance attributes. We model the data-generating process using the causal graph shown in Figure 7. Given the factual (original) text prompt for each video, sourced from CelebV-Text (94), we derive four counterfactual (target) prompts that are as similar as possible to the factual prompt, differing only in the specified interventions. To produce the counterfactual prompts and incorporate the interventions, we follow the assumed causal relationships depicted in the causal graph (Figure 7): for example, older men are more likely to have a beard or be bald than younger men, while women typically do not exhibit facial hair or baldness. In this way, we construct causally consistent counterfactual prompts. The counterfactual prompts are generated either manually or with the help of ChatGPT. An example is shown in Figure 7.

Figure 7: Evaluation dataset structure: Each factual prompt, sourced from CelebV-Text, is associated with four counterfactual prompts. Each counterfactual (target) represents an intervention on one of the following variables: age, gender, beard, or baldness. Interventions on upstream causal variables (e.g., age or gender) may lead to changes in downstream variables (e.g., beard or baldness), which are automatically incorporated into the counterfactual prompt.

# C Additional implementation details

For each baseline video editing method (FLATTEN (7), Tune-A-Video (89), and TokenFlow (15)), we adopt the default experimental hyperparameters provided in the original works. In our experiments, we implement the VLM-based textual loss using the GPT-4o model via the OpenAI API.
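The preprocessing in Appendix B (keeping the first 24 frames of each clip and resizing them to $512 \times 512$) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; in particular, the nearest-neighbour resize is an assumption, since the paper does not specify the interpolation used.

```python
import numpy as np

NUM_FRAMES, H, W = 24, 512, 512

def resize_nn(frame, h=H, w=W):
    """Nearest-neighbour resize of an (H0, W0, 3) uint8 frame."""
    h0, w0 = frame.shape[:2]
    rows = np.arange(h) * h0 // h  # source row index for each output row
    cols = np.arange(w) * w0 // w  # source column index for each output column
    return frame[rows][:, cols]

def preprocess(frames):
    """Keep the first 24 frames and resize each to 512x512."""
    return [resize_nn(f) for f in frames[:NUM_FRAMES]]

# dummy 720p frames standing in for a decoded CelebV-Text clip
clip = [np.zeros((720, 1280, 3), np.uint8) for _ in range(30)]
processed = preprocess(clip)
print(len(processed), processed[0].shape)  # 24 (512, 512, 3)
```

In practice the frames would come from a video decoder and a proper interpolation (e.g., bilinear) would likely be used; the structure of the step is what matters here.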
However, our approach is also compatible with local VLMs currently supported by the TextGrad package (97). The LLM used to perform the TextGrad update (Section 3.2) is GPT-4o, the same model used for the VLM loss. We also use the GPT-4o API to compute the VLM minimality metric (Section 3.4), as it offers improved filtering of the causal graph variables in the generated text descriptions. In addition, for the BERT-based semantic text encoder $\tau_{\phi}$ used in Section 3.4 to generate semantic text embeddings, we leverage the all-MiniLM-L6-v2 model (85), which maps the text descriptions into a 384-dimensional vector space. Lastly, to evaluate effectiveness as described in Section 3.3, we utilize the llava-hf/llava-v1.6-mistral-7b-hf model (42).

# D Prompts

# D.1 Evaluation Instruction

We outline the methodology used to construct the evaluation instruction prompt for the VLM-based textual loss, as described in Section 3.2. First, given the factual (source) prompt of the original video and the initial counterfactual (target) prompt derived as described in Appendix B, we programmatically extract the target interventions by comparing the two. Below, we provide representative examples.

Listing 1: Target Interventions Extraction

Listing 2: VLM Evaluation Instruction

Given the initial counterfactual prompt and the target interventions, we provide the VLM with the following evaluation instruction:

- You are given an image of a person's face.
- A counterfactual target prompt is provided: {counterfactual_prompt}
- Corresponding interventions are specified: {target_interventions}
- Evaluate how well the given image aligns with the specified counterfactual attributes in the target prompt.
- Calculate an accuracy score based only on the attributes that were explicitly modified (i.e., the interventions).
- Do not describe or alter any other visual elements such as expression, hairstyle, background, clothing, or lighting.
- Identify and list any attributes from the interventions that are missing or incorrectly rendered.
- Criticize.
- Suggest improvements to the counterfactual prompt to better express the intended interventions.
- The optimized prompt should maintain a similar structure to the original prompt.
- If the alignment is sufficient, return: "No optimization is needed".

# D.2 Causal decoupling prompt

We further augment the evaluation instruction prompt with a causal decoupling prompt, as described in Section 3.2, in cases where interventions involve downstream variables (e.g., beard, bald) in the causal graph:

- If either beard or bald appears in target_interventions, do not include references to age or gender.

This results in optimized prompts that exclude references to upstream variables (e.g., age, gender), effectively breaking the assumed causal relationships and simulating graph mutilation (55). By using such prompts, the LDM backbone of the video editing method can generate videos that violate the assumptions of the causal graph, for example by adding a beard to a woman.

# E Evaluative textual feedback generated by the VLM-based Loss

For demonstration purposes, we provide the textual feedback derived from our proposed VLM-based loss during the prompt optimization process for the first video in Figure 1 (transforming an old woman into a young one) with the TokenFlow (15) editing method. In addition, we present the corresponding textual gradient $\frac{\partial \mathcal{L}}{\partial \mathcal{P}}$, which is used to update the initial prompt via TextGrad (97). First, we generate the counterfactual video using the initial counterfactual prompt (A woman is young), which represents an intervention on the age variable. Then, we provide a generated counterfactual frame to the VLM for evaluation.

Figure 8: Counterfactual video generated using the initial prompt, which failed to incorporate the target intervention (young).
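Putting the pieces of Appendices C and D together, the prompt optimization procedure (generate, evaluate with the VLM-based loss, update the prompt, and stop on a "no optimization" response) can be sketched as the loop below. All function names (edit_video, vlm_feedback, llm_update) are placeholders for components not shown here: the video editing backbone, the GPT-4o-based textual loss, and the TextGrad update.

```python
def optimize_prompt(prompt, video, edit_video, vlm_feedback, llm_update, max_steps=5):
    """Iteratively refine a counterfactual prompt until the VLM is satisfied."""
    for _ in range(max_steps):
        frames = edit_video(video, prompt)           # counterfactual generation
        feedback = vlm_feedback(frames[0], prompt)   # VLM-based textual loss
        if "no optimization is needed" in feedback.lower():
            break                                    # interventions satisfied
        prompt = llm_update(prompt, feedback)        # textual gradient step
    return prompt

# toy stand-ins: the "VLM" only accepts the refined prompt on the second try
responses = iter(["Attribute missing; refine the prompt.",
                  "No optimization is needed."])
final = optimize_prompt(
    "A woman is young", video=None,
    edit_video=lambda v, p: ["frame"],
    vlm_feedback=lambda f, p: next(responses),
    llm_update=lambda p, fb: "A woman in her early 20s",
)
print(final)  # A woman in her early 20s
```

The stopping condition mirrors the "No optimization is needed" sentence requested in the evaluation instruction of Appendix D.1.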
Below, we present the textual feedback produced by the VLM given a frame from the ineffective generated counterfactual video of Figure 8.

The image does not align well with the specified counterfactual attributes from the target prompt. The intervention specified was "young," but the person depicted in the image appears to be an older adult, likely in her 60s or 70s, based on visible signs of aging such as wrinkles and grey hair.

**Accuracy Score:** 0/1 (The attribute "young" was not correctly rendered)

**Failed Attributes:**
- Youthfulness: The individual in the image does not appear young.

**Suggested Improvement to the Counterfactual Prompt:** "A woman in her early twenties"

This optimized prompt more specifically targets the desired age range, enhancing the likelihood that the generated image will meet the counterfactual criteria.

In addition, we display the textual gradients $\frac{\partial \mathcal{L}}{\partial \mathcal{P}}$, which contain detailed criticisms and suggestions for improving the prompt based on the VLM's textual feedback.

The variable "A woman is young" lacks specificity and clarity, which likely contributed to the misalignment between the generated image and the intended attribute of youthfulness. The term "young" is highly subjective and can be interpreted variably depending on cultural, personal, and contextual factors. This ambiguity in the prompt may have led the Text-to-Image model to generate an image that does not meet the counterfactual criteria specified.

**Feedback and Criticism:**

1. **Specificity in Age Description**: The term "young" should be replaced with a more precise age or age range to guide the model more effectively.
For instance, specifying "a woman in her early twenties" or "a woman aged 20-25 years" would provide a clearer target for the model, reducing ambiguity and increasing the likelihood of generating an image that aligns with the youthful attribute desired.

2. **Inclusion of Contextual Details**: While the prompt should remain focused on the age attribute, adding minimal contextual details that align with youthful appearances could enhance the effectiveness of the prompt. For example, mentioning "a youthful demeanor" or "fresh-faced" could cue the model to focus on features typically associated with younger individuals.

3. **Avoiding Overly Broad Descriptors**: The use of broad descriptors like "young" without additional qualifiers can lead to a wide range of interpretations. It's beneficial to avoid these where precision is required for generating specific outcomes.

4. **Testing and Iteration**: After adjusting the prompt, it would be advisable to test the new prompt with the model to see if the changes yield a more accurate representation. This iterative approach can help fine-tune the prompt based on actual outcomes.

By addressing these points, the prompt "A woman is young" can be optimized to more effectively communicate the desired attribute of youthfulness to the Text-to-Image model, thereby improving the alignment of the generated image with the counterfactual target.

The textual gradients $\frac{\partial \mathcal{L}}{\partial \mathcal{P}}$ are provided as input to Textual Gradient Descent (97), which leverages an LLM to update the optimized variable (prompt), as described in Equation 3. After the TGD update, the counterfactual prompt becomes: A woman in her early 20s with vibrant expression.
After one textual gradient step: A woman in her early 20s with vibrant expression

Listing 6: VLM feedback from the second prompt optimization iteration

The input frame aligns well with the specified counterfactual attribute of appearing "young." The individual in the image presents as a young adult, which matches the intervention target of portraying youth. Therefore, the accuracy score based on the attribute of appearing young is high. No attributes from the interventions failed to appear or were incorrectly rendered in this context. Since the image successfully aligns with the desired attribute of youth, there is no need for optimization of the prompt. The response is "no_optimization".

In Listing 6, we display the textual feedback from the VLM after providing it with a frame from the effective counterfactual video generated using the optimized prompt (Figure 9). With this prompt, the age intervention (young) is successfully incorporated. Consequently, the VLM returns a "no optimization" response, and the prompt optimization process terminates.

# F VLMs for Assessing Effectiveness and Minimality

# F.1 Effectiveness

We present the VLM pipeline for evaluating causal effectiveness. As shown in Figure 10, the VLM receives as input the generated counterfactual frame and a multiple-choice question, extracted from the counterfactual prompt, that corresponds to the intervened attribute. Since we edit static attributes, a single frame is sufficient to assess the effectiveness of the interventions. An accuracy score is calculated across all generated counterfactual frames for each intervened variable (age, gender, beard, baldness) as described in Section 3.3.

Figure 9: Counterfactual video generated using the optimized prompt, which successfully incorporates the target intervention (young).

Figure 10: VLM causal effectiveness pipeline: example of a beard intervention.
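The per-variable accuracy aggregation described in Appendix F.1 can be sketched as follows. The VLM call itself is stubbed out, and the (variable, answer, target) tuples are hypothetical records of the multiple-choice answers; the assumed aggregation is the fraction of frames where the VLM's answer matches the intervened attribute.

```python
from collections import defaultdict

def effectiveness(records):
    """records: (variable, vlm_answer, target) tuples, one per generated frame.
    Returns the per-variable fraction of correct multiple-choice answers."""
    hits, totals = defaultdict(int), defaultdict(int)
    for var, answer, target in records:
        totals[var] += 1
        hits[var] += int(answer == target)
    return {var: hits[var] / totals[var] for var in totals}

# hypothetical answers for two frames of an age edit and one of a beard edit
scores = effectiveness([
    ("age", "young", "young"),
    ("age", "old", "young"),
    ("beard", "beard", "beard"),
])
print(scores)  # {'age': 0.5, 'beard': 1.0}
```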
# F.2 Minimality

In Figure 11, we showcase the VLM pipeline for evaluating minimality, as described in Section 3.4. The VLM takes as input frames extracted from the factual and counterfactual videos and produces text descriptions that exclude attributes from the causal graph. These text descriptions are then passed through a BERT-based semantic encoder (85) to generate semantic embeddings. The final minimality score is computed as the cosine similarity between these embeddings. The exact prompt used to instruct the VLM to filter the text descriptions from the causal graph variables is provided in Listing 7.

Figure 11: VLM minimality pipeline: example of a gender intervention.

Listing 7: VLM minimality prompt

Describe this frame in detail. Remove any references to age, gender (man, woman, he, she), beard, hair (including hairstyle, color, style, and facial hair), and baldness from the description. Return only the filtered version of the text, without commentary or formatting.

In Figure 12, we display the filtered text descriptions produced by the VLM. This specific factual and counterfactual pair achieves a VLM minimality score of 0.882. We observe that by measuring the semantic similarity of the VLM-generated text descriptions, we can isolate factors of variation not captured by the causal graph and effectively measure their changes under interventions on the causal variables.

Filtered text descriptions:

Factual: The individual in the image is smiling and appears to be outdoors, with a backdrop of lush green foliage. The person is wearing a striped shirt in shades of green and orange. The skin tone is a natural hue, and the overall expression conveys a sense of happiness or contentment.

Counterfactual: The individual in the image is wearing a striped shirt with bright colors. The background is a blurred natural setting, likely a park or forest area. The person is smiling and looking to the side, giving an expression of happiness or contentment. The skin tone appears to be an unusual shade, suggesting the image might have been altered with a color filter.

Figure 12: Filtered text descriptions derived from the VLM

# G More qualitative results

We present additional qualitative results generated using our proposed VLM-causal steering approach for video counterfactual generation.

Figure 13: First panel: intervention on beard. Second panel: intervention on age.

Figure 14: First panel: intervention on age. Second panel: intervention on gender.

Figure 15: Interventions on age.

# G.1 More qualitative results with VLM Causal Steering

Figure 16: First panel: Interventions on age. Second panel: Interventions on bald.
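Referring back to the minimality metric of Appendix F.2, its final step, cosine similarity between the semantic embeddings of the two filtered descriptions, can be sketched as below. The random 384-dimensional vectors are stand-ins for actual all-MiniLM-L6-v2 outputs, so the printed value is purely illustrative.

```python
import numpy as np

def minimality(emb_factual, emb_counterfactual):
    """Cosine similarity between two semantic text embeddings."""
    a = np.asarray(emb_factual, dtype=float)
    b = np.asarray(emb_counterfactual, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
e1 = rng.normal(size=384)             # stand-in for the factual description
e2 = e1 + 0.3 * rng.normal(size=384)  # a semantically nearby description
print(round(minimality(e1, e1), 3))   # 1.0 for identical descriptions
```

A score close to 1 indicates that everything outside the causal graph variables was left (nearly) unchanged by the edit.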
# Abstract

Adapting text-to-image (T2I) latent diffusion models for video editing has shown strong visual fidelity and controllability, but challenges remain in maintaining causal relationships in video content. Edits affecting causally dependent attributes risk generating unrealistic or misleading outcomes if these relationships are ignored. In this work, we propose a causally faithful framework for counterfactual video generation, guided by a vision-language model (VLM). Our method is agnostic to the underlying video editing system and does not require access to its internal mechanisms or finetuning. Instead, we guide the generation by optimizing text prompts based on an assumed causal graph, addressing the challenge of latent space control in LDMs. We evaluate our approach using standard video quality metrics and counterfactual-specific criteria, such as causal effectiveness and minimality. Our results demonstrate that causally faithful video counterfactuals can be effectively generated within the learned distribution of LDMs through prompt-based causal steering. With its compatibility with any black-box video editing system, our method holds significant potential for generating realistic "what-if" video scenarios in diverse areas such as healthcare and digital media.
Categories: cs.CV, cs.AI
# 1 Introduction

Large Language Models (LLMs) have dramatically transformed the landscape of information access, enabling systems that transcend traditional ranked retrieval and instead generate comprehensive, report-style answers to complex, multi-faceted queries. These deep research systems, exemplified by recent commercial developments such as OpenAI, Gemini, and Perplexity's deep research modes, as well as several open-source replicas (e.g., GPT Researcher and Open Deep Research) and similar academic endeavors [12, 13, 18, 19, 42, 50, 58], integrate iterative search, retrieval, and reasoning over vast textual corpora to address intricate user needs. With deep research pipelines supporting recent developments such as artificial scientists capable of generating top-tier conference papers [10, 27, 39, 44, 45, 52], and as industry leaders like Google and Apple signal a strategic shift towards AI-driven search and research solutions, it is clear that the future of information access lies in increasingly autonomous systems, capable of navigating and synthesizing large-scale data for complex information needs.

While deep research systems have already demonstrated their potential, a critical frontier remains underexplored: the integration of geo-temporal reasoning into deep research pipelines. We argue that incorporating the geographic and temporal dimensions into information access is essential for answering complex, contextually rich questions that span space and time [31, 34], which are commonly involved in producing actionable insights relevant to a broad spectrum of disciplines, from public health and environmental science to economics and urban planning. For instance, queries may seek regulatory changes across regions, environmental trends over decades, or health policy impacts in specific communities. However, extending existing deep research systems to support geo-temporal constraints introduces substantial new challenges.
By design, deep research systems are agentic, relying on planned iterative retrieval and reasoning over large text datasets (e.g., LLMs are used for planning, exploration, and tool calling, iteratively diving deeper into subtopics while maintaining a comprehensive view of the research subject). To enable geo-temporal reasoning, these processes must be augmented with techniques for geo-temporal text analysis and retrieval [2, 3, 14, 31, 34, 53], for instance including query planning and generation strategies that account for constraints in terms of place names, geographic footprints expressed through points or polygonal boundaries, and calendar dates or time intervals, together with ranked retrieval methods that account for proximity and/or containment relations [2, 3, 14], as well as the need for result diversity across the geo-temporal dimensions [25, 26]. The necessarily limited window of information that can be effectively processed by LLMs further constrains these systems, making it imperative for retrieval mechanisms to consider the salience and relevance of documents with respect to geo-temporal specificity. Moreover, accurate synthesis of retrieved information depends on precise parsing and disambiguation of geo-temporal cues in text [7, 22, 24, 57], potentially requiring external tools such as geo-coders, gazetteers, and mapping services to accurately ground information into real-world contexts, given that LLMs alone are likely to produce many errors in these tasks [1].

Beyond these technical hurdles, developing geo-temporal deep research systems is also an infrastructural challenge. For instance, current research in deep information retrieval and synthesis is constrained by the reliance on commercial Web search APIs, whose proprietary nature limits transparency, and where search result evolution undermines reproducibility [4].
This hinders progress, as new developments require access to large-scale, stable, and openly accessible platforms, capable of supporting comprehensive retrieval experiments. To support advancements, the research community must invest in building environments with broad and diverse coverage, i.e., search infrastructures that (a) simulate real-world conditions, (b) index large Web corpora, (c) incorporate advanced retrieval models supporting fine-grained queries and ranked retrieval involving geo-temporal constraints, and (d) incorporate geo-temporal modules for specialized tasks such as geocoding and reverse geocoding, map routing, or the computation of geo-spatial statistics, aggregations, and interpolations. Such environments would empower researchers to audit system behavior, analyze evidence influence, and re-run experiments under consistent conditions, ensuring fair benchmarking and reproducibility.

Equally crucial is the development of rigorous evaluation protocols tailored to geo-temporal deep research, aimed at assessing progress in tasks that entail the extraction and analysis of geo-temporally constrained data from diverse sources, including policy documents, scientific literature, or general Web content. For traditional retrieval-augmented generation systems, and even for deep research, some benchmarks have already been established as testbeds for assessing retrieval effectiveness and response quality in complex high-engagement queries [4, 15, 28, 30, 32, 36–38, 41, 46, 50]. These existing benchmarks can inform the creation of new ones, and previous methodologies, including LLM-as-a-Judge approaches [8, 16, 17], can perhaps also be extended to evaluate systems' capacity to align with geo-temporal constraints, assessing not only factual grounding and report quality, but also spatial and temporal relevance, coherence, and diversity.
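As a concrete illustration of requirement (c), a geo-temporal ranked-retrieval score could combine textual relevance with spatial proximity and temporal overlap. The sketch below is not from any cited system; the linear weighting, the exponential distance decay, and the field names are assumptions made purely for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geo_temporal_score(text_score, doc, query,
                       w_text=0.6, w_geo=0.2, w_time=0.2, decay_km=100.0):
    """Combine a textual relevance score with geo-temporal proximity."""
    # spatial component: exponential decay with distance from the query point
    dist = haversine_km(doc["lat"], doc["lon"], query["lat"], query["lon"])
    geo = math.exp(-dist / decay_km)
    # temporal component: fraction of the document interval inside the query interval
    overlap = max(0.0, min(doc["end"], query["end"]) - max(doc["start"], query["start"]))
    temporal = overlap / max(doc["end"] - doc["start"], 1e-9)
    return w_text * text_score + w_geo * geo + w_time * temporal

# a document exactly matching the query's place and time span scores maximally
doc = {"lat": 38.7, "lon": -9.1, "start": 2010.0, "end": 2020.0}
score = geo_temporal_score(1.0, doc, doc)
print(round(score, 3))  # 1.0
```

Real systems would replace the point-distance decay with footprint containment tests against polygonal boundaries, but the decomposition into textual, spatial, and temporal components is the essential idea.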
This paper presents a vision towards advanced deep research systems with geo-temporal capabilities (see Figure 1 for a high-level architectural overview). Research towards this vision must address the technical, infrastructural, and evaluative challenges inherent in integrating geo-temporal reasoning into deep research pipelines. Such a research program is also interdisciplinary in nature, as previous developments in geographical [3, 22, 24, 34, 53, 57] and temporal [2, 7, 14, 31, 48] text analysis and retrieval, and geo-temporal knowledge representation and reasoning [6, 11, 22, 23, 47, 51, 55], can now be combined with recent LLM advances. We call for the development of open, reproducible platforms and methodologies that enable meaningful progress in this exciting field, with the potential for transforming how we access, synthesize, and act on information in a world defined by geo-temporal interdependencies. # 2 Towards Geo-Temporal Information Synthesis This section outlines our vision for deep research systems enhanced with geo-temporal capabilities. Section 2.1 explores the broad and complex spectrum of questions that such systems must be equipped to address. Section 2.2 discusses the essential building blocks required to develop these systems. Finally, Section 2.3 highlights the critical considerations for designing open evaluation protocols that can accurately measure system effectiveness and reliability. # 2.1 Complex Geo-Temporal Information Needs Geo-temporal deep research systems have the potential to address a diverse range of complex information needs, combining iterative retrieval, reasoning, and synthesis across corpora. For example, they can be used to uncover geo-temporal correlations between environmental factors and human health outcomes, track the evolution of regulatory and policy landscapes over time and across regions, and provide comprehensive analyses of disaster and crisis events as they unfold. 
Additionally, they can explore dynamic phenomena such as epidemiological trends, urban development patterns, economic changes, environmental shifts, and demographic transformations. By integrating geo-temporal reasoning into retrieval and synthesis pipelines, these systems have the potential to enable nuanced insights that are critical for decision-making across disciplines. Crucially, the focus of geo-temporal deep research is not on automating GIS-style spatial analysis – including code generation or tool execution for complex spatial computations – but rather on orchestrating processes for iterative retrieval, reasoning, and synthesis from vast collections of unstructured textual documents. The goal is to produce comprehensive, report-style answers that integrate diverse forms of evidence. Even though geo-temporal analysis is required, and some cases may involve structured tabular outputs (e.g., summarizing data by region or time period), the focus is on text analysis for producing reports, which can later be incorporated into external GIS workflows if needed. The emphasis on text-based synthesis distinguishes the envisioned systems from previous efforts aimed at automating GIS workflows [20, 21, 56], although both types of approaches can indeed be combined. A list of question types, illustrating the breadth and complexity of the information needs that can be supported, is given next: • Multi-hop: Queries requiring information synthesis from multiple sources to construct a complete answer (e.g., what policies have been implemented in response to air pollution incidents in European urban centers over the last decade, and how have these policies affected public health outcomes?). • Long-tail: Questions about obscure facts, rare entities, or low-resource domains with limited coverage (e.g., what is known about the economic impact of a small rural festival in a specific region of Northern Europe?). 
• Time-sensitive: Questions involving explicit or implicit time anchors and/or temporal constraints, likely also involving the disambiguation of vague temporal expressions (e.g., what were the immediate policy responses to the COVID-19 outbreak in different European countries in early 2020?). • Location-sensitive: Questions involving explicit or implicit geospatial anchors, likely requiring the handling of vague references to specific regions, cities, or landmarks, and likely also requiring reasoning about containment or proximity between locations (e.g., what environmental policies have been enacted near the Amazon rainforest in the last five years?). • Freshness-sensitive: Real-time queries needing recent information, likely requiring real-time retrieval from sources like news or social media (e.g., what are the most recent developments in wildfire containment efforts in California?). • Diversity-sensitive: Queries seeking a breadth of perspectives, covering geographic, temporal, cultural, or thematic diversity (e.g., what are the different community responses to renewable energy projects across Europe?). • Distracting information: Queries susceptible to noise or false data, due to their focus on ambiguous or controversial topics (e.g., what are the current theories about the origins of a controversial archaeological site, and how do they differ?). • False premise: Questions based on incorrect assumptions, requiring systems to detect and correct the underlying misunderstandings (e.g., what are the current prices of high-end CPUs and/or GPUs manufactured in Europe?). • Temporal comparison: Queries asking for comparisons across different time periods (e.g., how have hurricane frequencies in the Caribbean changed over the past 50 years?). • Geographical comparison: Queries asking for comparisons across different places (e.g., how do air quality levels compare between urban and rural areas in Europe?). 
• Temporal aggregation: Queries that necessitate aggregating information across time (e.g., what is the cumulative economic loss, for different southern European countries, caused by natural disasters in the last decade?). • Geographical aggregation: Queries that require aggregating information across space (e.g., how many endangered species exist across phyla and geographical continents?). • Granularity-sensitive: Queries that require reasoning over different levels of spatial or temporal granularity, likely involving nested or hierarchical relationships (e.g., what trends emerge in European employment rates, when comparing monthly vs. quarterly data at regional and national levels?). • Temporal ordering: Queries about the sequence or order of events over time (e.g., what events led up to the signing of a major environmental treaty?). Notice that most real-world information needs are inherently multi-faceted, involving complex constraints that span multiple dimensions – temporal, geographic, and thematic. Even the example questions provided in the previous list frequently combine these elements, illustrating the need for integrated reasoning across diverse contexts. For instance, a query like what were the key differences in health policy decisions made by coastal cities during the early stages of the COVID-19 pandemic, and how did these decisions evolve over the following year exemplifies the overall complexity, requiring temporal tracking, geographical specificity, and thematic depth. This range of example question types not only underscores the breadth of information needs these systems must support, but also hints at the architectural components that are essential for such systems – including capabilities for planned iterative retrieval, geo-temporal reasoning, and evidence synthesis – as well as the evaluation protocols necessary to assess effectiveness. These aspects will be further explored in the next subsections. 
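The geo-temporal constraints running through these question types can be made concrete with a small sketch. The snippet below (illustrative only; the toy gazetteer and regex-based year extraction are assumptions, standing in for a full gazetteer service and a proper temporal tagger) splits a query into a topical part plus explicit place and time constraints:

```python
import re

# Toy gazetteer mapping place names to (lat, lon) centroids; a real system
# would resolve names against a full gazetteer or geo-coding service.
GAZETTEER = {
    "california": (36.78, -119.42),
    "amazon rainforest": (-3.47, -62.22),
}

def extract_constraints(query: str) -> dict:
    """Split a natural-language query into topical text plus explicit
    geographic and temporal constraints (years and known place names)."""
    q = query.lower()
    years = [int(y) for y in re.findall(r"\b(?:19|20)\d{2}\b", q)]
    places = {p: GAZETTEER[p] for p in GAZETTEER if p in q}
    return {
        "topic": query,
        "time_range": (min(years), max(years)) if years else None,
        "places": places,
    }

c = extract_constraints("What environmental policies were enacted near the "
                        "Amazon rainforest between 2019 and 2024?")
# c["time_range"] == (2019, 2024); c["places"] resolves "amazon rainforest"
```

In a deployed system, the structured output of such a step would drive query generation, footprint-aware retrieval, and the diversification strategies discussed below.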
# 2.2 Architectural Components Designing effective geo-temporal deep research systems introduces a range of architectural challenges that go beyond those encountered in standard deep research frameworks. At the heart of these challenges lies the need to handle geo-temporal constraints in user queries, necessitating careful consideration in the design of components for query generation, initial evidence retrieval, re-ranking, and information synthesis. Figure 1 provides a general illustration of the main components involved in one such system, and as most components would likely involve neural models, the entire system (or some of the key components) can be adjusted end-to-end through reinforcement learning, using task-specific evaluation metrics as reward functions for training [19, 42, 43, 58]. Query generation plays a pivotal role in translating information needs into actionable search queries. Unlike standard systems that plan and formulate queries primarily with a topical focus, geo-temporal deep research systems must construct queries that explicitly capture geo-temporal constraints, such as particular locations, regions, dates, or time periods of interest. This includes generating queries that seek not only geo-temporally localized information, but also diverse results spanning multiple spatial regions or temporal windows. Techniques from Geographic Information Retrieval (GIR) and Temporal Information Retrieval (TIR) provide inspiration here [31, 34], offering strategies for extracting and integrating geo-temporal signals from natural language queries. The retrieval and re-ranking components also require adaptations to deal with geo-temporal queries [2, 3, 14, 31, 34, 53], and at the same time they must operate under constraints of relevance, precision, and scalability, particularly as LLMs can only process a limited number of documents to effectively summarize content and synthesize reports. 
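One such re-ranking adaptation can be sketched as a score that blends topical relevance with spatial proximity and temporal overlap. The weights, the 500 km distance-decay scale, and the document/query fields below are all illustrative assumptions, not a prescribed design:

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def geo_temporal_score(doc, query, w_text=0.6, w_geo=0.2, w_time=0.2):
    """Blend topical relevance with spatial proximity (exponential distance
    decay) and the fraction of the query's year window covered by the doc."""
    geo = math.exp(-haversine_km(doc["loc"], query["loc"]) / 500.0)
    lo = max(doc["years"][0], query["years"][0])
    hi = min(doc["years"][1], query["years"][1])
    span = max(query["years"][1] - query["years"][0], 1)
    time_overlap = max(hi - lo, 0) / span
    return w_text * doc["text_score"] + w_geo * geo + w_time * time_overlap
```

Re-ranking then amounts to sorting candidates by this score; where polygonal footprints are available, containment tests would replace the point-based distance decay.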
This requires highly accurate retrieval pipelines that can identify and prioritize the most relevant documents. Here, instruction-following and reasoning-based retrieval models offer promising directions [5, 40, 49, 54, 59, 60], combining the structured handling of geo-temporal constraints, as in GIR/TIR systems, with the flexibility and task-following capabilities of advanced LLMs. These approaches can better interpret complex queries (e.g., those requiring reasoning about proximity, containment, or temporal ordering), and identify small, high-quality evidence sets. A significant limitation in current deep research systems arises from infrastructure dependencies [4]. Most state-of-the-art systems rely on commercial Web search APIs to perform retrieval, which introduces critical challenges. These APIs are opaque, hindering transparency in the retrieval process, and their evolving nature undermines reproducibility and fair benchmarking. Geo-temporal deep research systems, given their specialized retrieval needs, further motivate dedicated infrastructures. These should consider document collections with broad and diverse coverage, embedding models fine-tuned for effective geo-temporal retrieval, and scalable nearest-neighbor search indices that can efficiently handle complex, high-dimensional retrieval tasks. Developing such infrastructures in the open research ecosystem is essential for progress. Besides query planning and document retrieval, the summarization and reasoning components also involve additional considerations to synthesize information across multiple documents, in a way that incorporates the geo-temporal context [11, 48, 53]. This involves not only integrating retrieved content, but also accurately interpreting and disambiguating geo-temporal information [7, 9, 22, 24, 57], such as place names, addresses, temporal expressions, and complex spatial relationships. 
LLMs used for summarization may require multiple processing steps, coordinating tasks such as geo-coding, reverse geo-coding, map-based reasoning, and even performing lightweight GIS operations, like aggregating counts or interpolating values over spatial regions. These capabilities can be improved and further augmented by integrating external tools and services (e.g., gazetteers, geo-coding APIs, routing services, or temporal reasoning libraries) to support the generation of precise and grounded responses. Again, previous developments in GIR/TIR can inform component design [11, 22, 31, 34, 53]. Overall, the architecture of geo-temporal deep research systems must harmonize sophisticated retrieval pipelines, stable and transparent infrastructures, and advanced reasoning components. This orchestration supports the generation of long-form, spatiotemporally aware reports that address complex information needs, a capability well beyond what current systems can achieve. # 2.3 Challenges for Evaluation Besides properly evaluating individual components, advancing geo-temporal deep research requires comprehensive evaluation protocols that can assess overall performance under realistic scenarios. While previous efforts, e.g. the TREC RAG shared task [32, 33, 46] and others [15, 28, 30, 36–38, 41], have provided valuable baselines for the evaluation of Retrieval-Augmented Generation (RAG) systems, these approaches fall short of addressing the unique challenges introduced by deep research. RAG typically operates over static domain-specific corpora and is evaluated using standard retrieval scores (e.g., NDCG) and text generation metrics based on the overlap with ground-truth answers. Deep research, on the other hand, is characterized by interactive agentic workflows over large-scale collections, including the ever-changing Web, which introduces challenges not only for system design but also in defining fair, reproducible, and meaningful evaluation methodologies [4, 36, 50]. 
One of the central challenges lies in the nature of the outputs generated by deep research systems, which are long-form, report-style answers that integrate information from multiple sources, often involving multi-step reasoning. Evaluating such outputs goes well beyond simple fact-checking or retrieval relevance assessments. Metrics like FActScore [29] or Key Point Recall [35] offer a partial solution, e.g. by measuring the coverage of salient points from retrieved sources in long-form outputs. However, even these metrics are limited in scope, especially when considering geo-temporal dimensions. Alternatively, methods like LLM-as-a-Judge [8, 16, 17], which use language models to assess the quality, relevance, and faithfulness of generated outputs, can be extended for deep research. Yet, all these methods need adaptations to account for geo-temporal relevance, coherence, and diversity – criteria that are essential for real-world information needs involving geo-temporal reasoning. Building on appropriate metrics, benchmarks also play a crucial role in shaping evaluation. Datasets like Researchy Questions [36] or InfoDeepSeek [50] can provide a starting point for building benchmarks with complex high-engagement queries. However, these datasets need to be adapted to focus specifically on geo-temporal constraints, either by filtering the questions to retain those with explicit geo-temporal elements, or by creating new questions reflecting these dimensions. Inspiration can also be drawn from datasets designed for assessing GIScience-related capabilities in LLMs [6, 23, 47, 51, 55], although these primarily focus on short-answer queries, underscoring the need for datasets that address the complexities of long-form, geo-temporally grounded outputs. The reproducibility challenge further complicates evaluation. Because deep research systems often retrieve information from dynamic Web collections, it becomes difficult to ensure consistent conditions across tests [4]. 
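To make the coverage-style metrics discussed for long-form outputs concrete, the following deliberately simplistic sketch approximates a key-point recall by lexical word overlap; the actual Key Point Recall metric [35] relies on model-based matching of key points, so this toy heuristic is only an illustration of the metric's shape:

```python
def key_point_recall(report, key_points, threshold=0.5):
    """Toy key-point recall: a key point counts as covered when at least
    `threshold` of its words occur in the generated report."""
    report_words = set(report.lower().split())
    covered = 0
    for kp in key_points:
        words = kp.lower().split()
        hit = sum(w in report_words for w in words) / len(words)
        covered += hit >= threshold
    return covered / len(key_points)
```

A geo-temporal extension would additionally check whether each covered key point is attributed to the correct place and time window, rather than merely mentioned.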
Evaluation protocols must therefore be supported by retrieval infrastructures that offer stable, broad, and diverse coverage, and that can faithfully simulate real-world settings. Without this, it is nearly impossible to conduct fair comparisons of system performance, or to track progress over time. Ultimately, evaluating geo-temporal deep research systems requires a multi-faceted approach. This involves adapting existing methodologies, e.g., LLM-as-a-Judge [8, 16, 17], to incorporate assessments of geo-temporal relevance, coherence, and diversity. It also demands the construction of new, open benchmarks that represent complex, real-world information needs. Only through rigorous, comprehensive, and reproducible evaluation protocols can we ensure that these systems are not only technically capable, but also effective and trustworthy in addressing the increasingly complex geo-temporal information needs of modern users.
The emergence of Large Language Models (LLMs) has transformed information access, with current LLMs also powering deep research systems that can generate comprehensive report-style answers, through planned iterative search, retrieval, and reasoning. Still, current deep research systems lack the geo-temporal capabilities that are essential for answering context-rich questions involving geographic and/or temporal constraints, which occur frequently in domains like public health, environmental science, or socio-economic analysis. This paper presents our vision towards next-generation systems, identifying important technical, infrastructural, and evaluative challenges in integrating geo-temporal reasoning into deep research pipelines. We argue for augmenting retrieval and synthesis processes with the ability to handle geo-temporal constraints, supported by open and reproducible infrastructures and rigorous evaluation protocols. Our vision outlines a path towards more advanced and geo-temporally aware deep research systems, with potential impact on the future of AI-driven information access.
[ "cs.CL", "cs.IR" ]
# 1 Introduction Detecting human activities from still images is a challenging problem in computer vision, largely due to the subtle and complex variations inherent in human behaviour. In this work, we address this task using a subset of the MSCOCO 2017 validation split introduced by Lin et al. (2015), with each image labeled as walking/running, sitting, or standing. We begin with two baseline models, a Convolutional Neural Network (CNN) and a Feedforward Neural Network (FNN), and then enhance performance through data augmentations, dropout, weight decay, and early stopping. To utilize broader visual knowledge, we apply transfer learning with pretrained Vision Transformers and contrastive models (e.g., CLIP) and explore multimodal embeddings for richer feature representations. We detail the preprocessing steps, model configurations, and evaluation metrics to enable transparent comparison across all methods. # 2 Description of Data and Methods # 2.1 Data The dataset for this study is drawn from a carefully curated subset of the Microsoft COCO (Common Objects in Context) validation split, originally introduced by Lin et al. (2015) and now a gold-standard benchmark in computer vision tasks such as object detection, segmentation, and image captioning. From the full COCO set, we selected 285 images depicting exactly one of three human activities: walking/running (98 images), sitting (95 images), or standing (92 images), yielding a nearly balanced three-way classification problem. All images were downloaded directly via their URLs, and none were discarded due to corruption or missing annotations, confirming complete data integrity. A few examples of the training dataset are shown in Figure 1. After performing a detailed exploratory data analysis (EDA), it can be seen that the images range from 300 to 640 pixels in width, clustering around an average of approximately 566 pixels, and from 240 to 640 pixels in height, centred near 499 pixels, as shown in Table 1. 
Moreover, when comparing the distribution plots for walking/running, sitting, and standing, their medians, interquartile ranges, and overall distributions align almost perfectly. This indicates that no activity label consistently contains larger or smaller images. Table 1 also highlights that the class frequencies remain nearly identical, with each activity accounting for roughly one-third of the dataset, so no label imbalance is expected to bias model training. The scatter and box plots in Figure 2 further confirm that no class systematically contains larger or smaller images. At the same time, the aspect-ratio histogram exhibits two dominant modes around 1.0 (square image) and 1.33 (4:3), with an overall mean ratio of approximately 1.2. Together, these findings demonstrate that the dataset is inherently balanced and free of resolution- or framing-based biases. This means that any necessary standardisation (e.g. image resizing and normalisation) can be applied uniformly, and the augmentation strategies need to focus only on semantic diversity rather than correcting for class-specific size or aspect ratio artefacts. Figure 2: Exploratory data analysis of image dimensions: (a) width vs. height scatter, (b) height/width box-plots grouped by activity, (c) distribution of aspect ratios, and (d) overlaid height and width histograms. Table 1: Dataset and per-class image statistics. Ranges (width: 300–640 px; height: 240–640 px) apply across all classes. No missing or corrupted entries were found. # 2.2 Models In this work, we compare several neural network architectures and transfer learning approaches for image classification. In the following subsections, we describe each model and the underlying design choices. 
CNN and FNN The CNN_base model follows a classic convolutional design: three blocks of $3 \times 3$ convolutions (padding $=1$) each succeeded by ReLU activations and $2 \times 2$ max-pooling, with the resulting feature map flattened into a two-layer fully connected classifier. By exploiting local spatial correlations and hierarchical feature extraction, it embodies the core principles of convolutional networks (LeCun et al., 1998; O’Shea and Nash, 2015) and serves as a robust baseline. In contrast, the FNN_base model treats each image as a flat vector passed through successive dense layers with nonlinearities. Lacking the spatial inductive biases and weight sharing of convolutions, this fully connected architecture consistently underperforms on visual data (LeCun et al., 2015). Generalising CNN The CNN_gen model extends the baseline CNN architecture by integrating several regularization and normalization techniques to improve generalization. Batch normalization (Ioffe and Szegedy, 2015) is applied after each convolution to stabilize activations, and dropout layers are interleaved with the convolutional and fully-connected blocks to prevent overfitting. These modifications, combined with data augmentation during training, enable the CNN_gen model to achieve better performance on unseen data while maintaining the simplicity of convolutional feature extraction. Transfer Learning for Binary Classification We leverage a pretrained Vision Transformer (ViT) backbone, which splits each image into $16 \times 16$ patches and processes the resulting sequence through standard transformer blocks to capture long-range dependencies and global context (Dosovitskiy et al., 2021). Transformers’ self-attention mechanism and large-scale pretraining yield highly generalizable feature representations, making them particularly well suited for transfer learning across diverse vision tasks. 
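The flattened size feeding CNN_base's fully connected classifier follows from standard shape arithmetic: a $3 \times 3$ convolution with padding 1 preserves spatial size, and each $2 \times 2$ max-pool halves it. The sketch below works through this, assuming a $224 \times 224$ input and 128 channels in the last block (both are assumptions for illustration, since the exact resize target and channel widths are not stated above):

```python
def conv2d_out(n, k=3, p=1, s=1):
    """Output spatial size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

def pool2d_out(n, k=2, s=2):
    """Output spatial size of a max-pooling layer."""
    return (n - k) // s + 1

def flattened_size(side, channels, blocks=3):
    """Spatial side and flattened vector length after conv+pool blocks."""
    for _ in range(blocks):
        side = pool2d_out(conv2d_out(side))  # conv keeps size, pool halves it
    return side, channels * side * side

side, flat = flattened_size(224, channels=128)
# spatial side shrinks 224 -> 112 -> 56 -> 28, so flat == 128 * 28 * 28
```

This arithmetic determines the input width of the first fully connected layer; a mismatch here is a common source of shape errors when reimplementing such baselines.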
In addition, we fine-tune two modern vision–language encoders, CLIP (Radford et al., 2021) and SigLIP2 (Tschannen et al., 2025), by replacing their projection heads with a two-way classifier and optimizing all parameters end-to-end under a cross-entropy objective. Transfer Learning for Multiclass Classification For multiclass tasks, we adopt the same pretrained ViT, CLIP, and SigLIP2 models, extending each classification head to output $C$ logits, where $C$ is the number of target categories. All three backbones are fine-tuned jointly with the new head under cross-entropy loss, allowing them to adapt their rich, pretrained representations to the specific demands of our domain-specific multiclass classification problem. CLIP Image Embeddings We use CLIP embeddings in two complementary ways. In the first setting, we treat CLIP (Radford et al., 2021) as a pure image encoder: given an image, we encode it to obtain a fixed-length vector of dimension $d$ (typically 512). These image vectors are paired with their ground-truth labels to build a simple PyTorch dataset, which we split into training, validation, and test subsets. We then train a small multilayer perceptron (MultimodalClassifier) on top of the raw CLIP embeddings, consisting of several fully connected layers with ReLU, batch normalization, and dropout, and optimize with cross-entropy loss. This setup tests how linearly separable the CLIP image representations are for our target classes. CLIP Image-Text Embeddings In the second setting, we take advantage of CLIP’s joint image–text space by also encoding a set of textual label descriptions (e.g., “walking”, “standing”, “sitting”). We compute the cosine similarity between each image embedding and each label embedding, producing an $N \times C$ similarity matrix (where $N$ is the number of images and $C$ the number of classes). Each row of this matrix, one cosine score per class, serves as a compact, semantically meaningful feature vector. 
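The construction of this $N \times C$ similarity matrix can be sketched as follows; the toy 2-dimensional vectors below stand in for the 512-dimensional CLIP embeddings used in practice:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similarity_features(image_embs, label_embs):
    """N x C matrix of image-to-label cosine similarities; each row is
    one image's C-dimensional feature vector."""
    return [[cosine(img, lab) for lab in label_embs] for img in image_embs]

feats = similarity_features(
    image_embs=[[1.0, 0.0], [0.0, 1.0]],   # toy stand-ins for CLIP image vectors
    label_embs=[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],  # toy label-text vectors
)
```

Each row of `feats` is the compact per-class feature vector described above, on which the downstream classifier is trained.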
We then train a second MLP (FeatureClassifier) on these similarity features, allowing the model to directly leverage the semantic affinity between images and label text without relying solely on the high-dimensional raw embeddings. # 2.3 Experimental Approach We partitioned the dataset into training, validation, and test sets using an $80\%$-$10\%$-$10\%$ stratified split with a fixed random seed of 42 to ensure reproducibility. All experiments were run on a single NVIDIA T4 GPU with 16GB of memory. To account for variability, each configuration was executed five times, and we conducted a one-way ANOVA on the resulting performance scores to assess the statistical significance of our findings. We evaluated our models using accuracy, precision, recall, and F1 scores. All our models were implemented in PyTorch (Paszke et al., 2019) for clarity and ease of understanding. # 3 Results In this section we first compare the two from-scratch baselines (CNN_base and FNN_base), then present the impact of generalisation techniques (CNN_gen), followed by all transfer-learning and multimodal variants. Table 2: Test performance of CNN_base vs. FNN_base. CNN and FNN As Table 2 illustrates, the CNN_base model outperforms its fully connected counterpart, raising accuracy from $32.4\%$ to $38.6\%$ and achieving higher precision and F1 score, emphasising how convolutional layers’ spatial inductive biases yield richer feature representations than a parameter-matched dense network. Table 3: Effect of different augmentations on CNN performance, evaluated on the validation set. Extending CNN Table 3 summarizes validation performance across ten augmentation strategies. Simple geometric transforms, particularly vertical flips, nearly doubled baseline accuracy and substantially boosted recall, while perspective transforms yielded the highest precision with only a modest drop in overall accuracy. Random resized crops also provided consistent gains. 
In contrast, aggressive combinations or colour-based perturbations, such as colour jitter and grayscale, often degraded performance, indicating that excessive or semantically misleading distortions can hinder learning. Figure 3: Evaluation of the ViT model trained on binary classes using transfer learning. Table 4: Overall performance of the augmented CNN (CNN_gen) vs. baseline. Informed by the augmentation study, we built CNN_gen by applying vertical flips, perspective transforms and random resized crops, and by adding batch normalization and dropout within each convolutional block. As Table 4 shows, this model outperforms CNN_base across all key metrics on the test set, boosting accuracy by over 3 points and delivering comparable improvements in precision, recall and F1 score. These results confirm that combining targeted augmentations with stronger regularization markedly enhances generalization. Table 5: Performance metrics for binary classification models. Binary Classification Table 5 compares the binary classification performance of three transfer-learning models (CLIP, ViT, and SigLIP2) when classifying images as either standing or sitting. CLIP emerges clearly on top, leveraging its joint image–text training to capture class-relevant cues that ViT and SigLIP2 miss. Although ViT matches CLIP’s ability to avoid false positives, it struggles to recall all positive instances, often confusing subtle posture shifts, while SigLIP2’s additional multilingual and dense pretraining appears to dilute its focus on our specific activities. Figure 3’s confusion matrix for ViT underscores these patterns, showing a notable fraction of sitting images incorrectly assigned to the standing class. Table 6: Performance metrics for different CLIP settings. 
$CLIP_{EM}$ denotes the model trained on CLIP image embeddings, and $CLIP_{CS}$ denotes the model trained on CLIP image and text cosine similarities. CLIP Embeddings As Table 6 makes clear, the MLP built on raw CLIP embeddings outperforms its cosine-similarity-only counterpart across the board, lifting accuracy from $27.6\%$ to $37.9\%$ and yielding correspondingly higher precision and F1. These results indicate that the high-dimensional CLIP image vectors contain rich discriminative information that a shallow MLP can effectively exploit, whereas relying only on the $C$-dimensional similarity scores compresses the features too aggressively and hampers class separation. In particular, the steep drop in precision for $CLIP_{CS}$ suggests it produces many false positives when forced to decide based on cosine relationships alone. Multiclass Classification Table 7 presents the full multiclass comparison. The fine-tuned $CLIP_{IC}$ model leads by a wide margin, achieving roughly $76\%$ accuracy and similarly high precision, recall, and F1. It outperforms the next best, SigLIP2, and leaves ViT and our convolutional baselines (CNN_gen, CNN_base) trailing well behind. The shallow embedding-based classifiers ($CLIP_{EM}$, $CLIP_{CS}$) occupy the middle ground, while the fully connected FNN_base sits at the bottom. A one-way ANOVA on accuracy across the seven models confirms that these differences are statistically significant ($F = 23.4562$, $p < 0.001$). A paired t-test between the two from-scratch baselines (CNN_base vs. FNN_base) yields $t = 1.4505$, $p = 0.2205$, indicating no significant difference between them. 
Together, these results highlight the clear advantage of large-scale, contrastive pretraining (as in CLIP and SigLIP2) over both vanilla convolutional and dense architectures, and the particular strength of CLIP when fine-tuned for our multiclass classification problem. Figure 4: Explainability of the model. Table 7: Final leaderboard. $CLIP_{IM}$ denotes a model trained on CLIP image embeddings, $CLIP_{CS}$ denotes a model trained on CLIP image and text cosine similarities, and $CLIP_{IC}$ denotes the CLIP model fine-tuned for classification. # 4 Discussion Explainability To better understand how our best model, CLIP, distinguishes between visually similar human activities, we apply LeGrad (Bousselham et al., 2024), an explainability method tailored for transformer-based vision models. LeGrad computes the gradient of the model’s output logits with respect to each attention map across all ViT layers, then aggregates these signals, combining both intermediate and final token activations, into a single saliency map. In Figure 4, we show the original image (a) alongside the LeGrad maps for the model’s predicted classes “standing” (b), “walking_running” (c), and “sitting” (d). These visualisations reveal two key challenges posed by our dataset. First, the fine-grained differences between standing, walking, and sitting settings result in overlapping attention regions, which can confuse the model’s decision boundary. Second, the heterogeneous objects in the image introduce noise into the attention gradients, making it difficult for even a powerful transformer-based encoder to focus exclusively on the human subject. Together, these factors help explain why our highest-accuracy models still struggle to exceed $80\%$ on these classes and underscore the need for more targeted spatio-temporal features or refined attention mechanisms. 
Error Analysis Figure 5 illustrates representative failure cases of our models on "standing," "walking_running," and "sitting." We observe that small or partially occluded people are often mistaken for static poses by CNN_base and FNN_base, while low-amplitude motions (e.g., slow walking) confuse ViT and SigLIP2, which over-rely on per-frame posture cues. Dynamic backgrounds (e.g., moving vehicles or flags) occasionally dominate SigLIP2's embeddings, leading to "standing" predictions, whereas CLIP's multimodal pretraining improves robustness but still misclassifies very low-resolution actors. Finally, borderline poses (e.g., slight weight shifts) lie near the decision boundary for all models. These systematic errors underscore the need for richer spatio-temporal features and stronger human-focused attention mechanisms to further enhance performance.
Recognising human activity in a single photo enables indexing, safety and assistive applications, yet lacks motion cues. Using 285 MSCOCO images labelled as walking, running, sitting, and standing, scratch CNNs scored 41% accuracy. Fine-tuning multimodal CLIP raised this to 76%, demonstrating that contrastive vision-language pre-training decisively improves still-image action recognition in real-world deployments.
# 1. Introduction Reward models are a fundamental concept in reinforcement learning and define what an agent optimizes for. For large language models (LLMs), fine-tuning with reward models is a common post-training step to align the model outputs with desired behaviors and objectives. A widely adopted approach is to learn reward models that capture human preferences and fine-tune the LLMs to generate outputs that align with these preferences. Reinforcement learning from human feedback (RLHF) is an early example of such approaches (Christiano et al., 2017; Stiennon et al., 2020). Now, work in this area is underway more broadly. One recent example is a series of models by OpenAI (2024), in which human-like thinking and complex reasoning can be achieved through large-scale reinforcement learning. While quite successful, reward models are costly to apply. This is in part because of the complexity of reinforcement learning algorithms and in part because of the difficulty in annotating training data. There has been much work on simplifying the use of reward models and improving alignment efficiency. One strand of research explores more direct ways to align LLMs with human feedback, employing either supervised fine-tuning methods (Rafailov et al., 2023; Touvron et al., 2023) or inference-time alignment methods (Lee et al., 2021). Another strand of research focuses on replacing human feedback with AI-generated feedback, which is cheaper to obtain (Dubois et al., 2023; Lee et al., 2024). However, although applying reward models to LLMs is a compelling direction, training these models still relies heavily on labeled data. For example, we generally need to collect or create a significant amount of task-specific human preference data and optimize the models with considerable training effort (Stiennon et al., 2020; Xu et al., 2024). 
If we think about the problem a bit more closely from the LLM perspective, we might expect that reward models can be trained on unlabeled data in such a way as to produce a single pre-trained reward model that can be easily adapted to tasks of interest. This would change the way we align LLMs: we can pre-train a foundation model that assembles broad general knowledge of how to reward, and a single such pre-trained model can be deployed for many particular rewarding tasks with only small costs of further fine-tuning or prompting. This idea is appealing but challenging. The difficulty arises from the fact that the systems cannot directly generate their own supervision signals from text for training reward models, as self-supervision methods do. One approach is to collect large-scale preference data for general use and train a reward model on this data to improve generalization (Cui et al., 2023; Liu et al., 2024). However, in this case, large amounts of unlabeled data are still largely overlooked. In this paper, we propose a solution to this problem that learns reward models not only from human-annotated preference data but also from unlabeled data. To do this, we develop a generative model that can predict, given the input and a pair of responses, which one is better. The training of this model involves two stages. In the first stage, we pre-train the model on input-response data to learn the correspondence between inputs and responses. This process does not require preference-annotated data and so can be easily scaled up to gain more general knowledge of response comparison. In the second stage, we fine-tune the model using human preference data to predict the preference between two responses. The resulting foundation reward model can be directly applied to downstream tasks, such as policy training, or further fine-tuned with a small amount of task-specific data. To make the model generalize better, we incorporate label smoothing into reward model training. 
We show that the training objective can be reformulated into a nice form: we are essentially optimizing the Bradley-Terry loss (Bradley & Terry, 1952) under the condition of label smoothing. This result is elegant, as it unifies generative and discriminative models in reward modeling to some extent. Though label smoothing has not been particularly popular in the development of recent LLMs, it turns out to be very beneficial for training generative reward models. The foundation reward model can be applied to a wide range of tasks. In our experiments, we test it in three different settings: response ranking, RLHF, and adaptation. Our model demonstrates strong generalization results across all test cases with little or no fine-tuning effort and improves performance significantly compared with various discriminative and generative baseline models. Notably, when training reward models with the LLaMA-3.1-8B-Instruct model, our model achieves gains of 11.0 and 5.1 points over vanilla discriminative and generative reward models, respectively, on the average accuracy of RewardBench. # 2. Preliminaries In this section, we outline some basic concepts and notations of reward modeling. # 2.1. Training Reward Models In the LLM literature, a reward model is typically written as a function $r_\phi(x, y)$, where $\phi$ is the set of model parameters, $x$ is the input, and $y$ is the response. Throughout this work, an "input" can be an arbitrary token sequence fed into an LLM, such as What is the capital of France?, and a "response" is the token sequence produced by the LLM as a result of that input, such as Paris. Figure 1: Architectures of discriminative and generative reward models. In discriminative models, the reward model is a scoring function that is trained to minimize the pairwise ranking loss between two responses. In generative models, we use an LLM to predict the label token given a prompt, an input, and a pair of responses. 
This model can be trained in the same way as standard LLMs. A widely used architecture for such functions is a Transformer decoder without a Softmax layer on top, as illustrated in Figure 1 (a). This model can be viewed as a discriminative classification model, and is commonly trained using the Bradley-Terry loss, given by $$ \mathcal{L}_{\mathrm{d}} = -\mathbb{E}_{(x, y_a, y_b) \sim D_r}\left[\log\left(\sigma\left(r_\phi(x, y_a) - r_\phi(x, y_b)\right)\right)\right] $$ where $D_r$ is the training dataset consisting of tuples of input $x$ and response pair $(y_a, y_b)$ with the preference $y_a \succ y_b$. While this loss function considers pairwise ranking between responses, the trained reward model is used as a scoring function that assigns a numerical score $r_\phi(x, y)$ to any response $y$, together with the corresponding input $x$. Reward models can also be generative models (Zhang et al., 2024; Shiwen et al., 2024). In this case, we can simply use an LLM as a reward model, as illustrated in Figure 1 (b). This model works as follows. First, we input a prompt $c$ along with the tuple $(x, y_a, y_b)$ to the LLM. The prompt is a text describing the task. For example: You are given two responses to a user input. Evaluate which response is better based on quality, relevance, and clarity. If the first response is better, return 'A'. If the second response is better, return 'B'. Then, the LLM predicts subsequent tokens based on this input sequence. Let $w$ be the token predicted by the LLM. If $w = \mathrm{A}$, it indicates a preference for $y_a$ over $y_b$; if $w = \mathrm{B}$, then $y_b$ is preferred. 
The loss function can be defined as the negative log-probability of predicting 'A': $$ \mathcal{L}_{\mathrm{g}} = -\mathbb{E}_{(c, x, y_a, y_b) \sim D_r}\left[\log \pi_\phi(w = \mathrm{A} \mid s)\right] $$ where $s$ denotes the string $[c, x, y_a, y_b]^1$, and $\pi_\phi(\cdot)$ denotes the probability of token prediction by the LLM. When applying this model to score a new input-response pair $(x', y')$, we generate a reference response $y_{\mathrm{ref}}$ by using the LLM, and concatenate $x'$, $y'$, and $y_{\mathrm{ref}}$ into $s' = [c, x', y', y_{\mathrm{ref}}]$. Additionally, to mitigate the positional bias problem (Wang et al., 2023), we introduce an alternative input order by transposing the positions of responses, i.e., presenting $y_{\mathrm{ref}}$ before $y'$, to construct a secondary input string $s'_T = [c, x', y_{\mathrm{ref}}, y']$. The reward for $(x', y')$ is thus defined as the average probability that $y'$ is preferred over $y_{\mathrm{ref}}$: $$ r_\phi(x', y') = \frac{\pi_\phi(w = \mathrm{A} \mid s') + \pi_\phi(w = \mathrm{B} \mid s'_T)}{2} $$ where the reward score ranges from 0 to 1. # 2.2. Applying Reward Models Three applications of foundation reward models can be considered in LLMs. One simple application is response ranking, where a number of responses are given, and we score and rank these responses. This approach is often used in reranking the LLM outputs. 
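The two reward-model formulations of Section 2.1 can be sketched numerically. The reward values and token probabilities below are hypothetical stand-ins for the model outputs $r_\phi(\cdot)$ and $\pi_\phi(\cdot)$; this is a minimal illustration, not the paper's implementation.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def bradley_terry_loss(r_chosen: float, r_rejected: float) -> float:
    """Eq. (1) for one tuple (x, y_a, y_b) with y_a preferred:
    -log(sigmoid(r_phi(x, y_a) - r_phi(x, y_b)))."""
    return -math.log(sigmoid(r_chosen - r_rejected))

def generative_reward(p_a_first: float, p_b_swapped: float) -> float:
    """Eq. (3): average the probability of predicting 'A' given s'
    with that of predicting 'B' given the transposed s'_T, which
    averages out positional bias."""
    return (p_a_first + p_b_swapped) / 2.0

# Discriminative case: the loss shrinks as the reward margin grows
# (hypothetical scores).
loss_small = bradley_terry_loss(1.0, 0.8)
loss_large = bradley_terry_loss(2.0, 0.0)

# Generative case: hypothetical LLM token probabilities; the reward
# stays in [0, 1].
reward = generative_reward(0.82, 0.64)
```

Note that at a zero margin the Bradley-Terry loss equals $\log 2$, and it approaches zero as the preferred response's score pulls ahead.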
For example, in best-of-$n$ sampling, we select the best output from the top $n$ candidate outputs via a reward model (Lee et al., 2021; Fernandes et al., 2022; Gao et al., 2023). A second application is reward-based fine-tuning, where the reward model provides feedback to optimize the LLM. For example, in RLHF, a reward model is used in proximal policy optimization (PPO) (Wang et al., 2022) to fine-tune the LLM for better alignment with human preferences (Ouyang et al., 2022; Bai et al., 2022). A third application is reward model adaptation. If we have labeled human preference data for a task, we can fine-tune the reward model further to better adapt it to the task. The fine-tuned reward model can then be applied to LLM fine-tuning as usual. Figure 2: Accuracies of discriminative and generative reward models on the ID and OOD test sets. # 3. A Generative Foundation Reward Model In this section, we describe a Generative foundation Reward Model, called GRAM. # 3.1. Why Generative Models Both discriminative and generative models have been widely adopted in reward modeling, but we found that generative models were more generalizable and better suited to our work. To study this issue, we trained both types of models on subsets of 400k and 40k samples from the Unified-Feedback dataset. We then evaluated these models on an in-distribution (ID) test set, consisting of 1k test samples from Unified-Feedback, and an out-of-distribution (OOD) test set, consisting of 3k samples from RewardBench (Lambert et al., 2024). As shown in Figure 2, the discriminative model is better on the ID test data, while the generative model is better on the OOD test data. While preliminary, this result supports the finding in previous work by Yang et al. (2024) and Zhang et al. (2024): LLMs can generalize better for reward modeling. 
In fact, discriminative and generative models share many design choices, such as using Transformer decoders (i.e., LLMs) to encode input-response pairs, but the training strategies make them behave quite differently. Training with the pairwise ranking loss provides a strong supervision signal. Given that the reward model is a well-trained LLM, it is more prone to overfit when simply mapping an input-response pair to a reward score. Generative models are essentially trained in a similar manner, but with much noisier samples. For example, each time, we need to model two responses in the same sequence, and adding an extra prompt to the sequence introduces more modeling challenges. The diversity and variability of the samples make the training task more difficult, which in turn encourages the model to generalize more. Furthermore, recent research highlights the superior flexibility of generative models compared to discriminative models in their adaptability to various LLM enhancement techniques. For example, the seamless integration of chain-of-thought reasoning within a generative reward model has been shown to improve reward accuracy (Mahan et al., 2024). Beyond merely adopting such reasoning patterns, the generative reward model can be designed to perform long-form reasoning before generating preferences (Chen et al., 2025; Guo et al., 2025). This enhanced flexibility makes the generative model an ideal choice for our work, as we aim to build a more versatile foundation reward model. # 3.2. A Two-stage Training Method Unlike previous work, we do not use only human preference data to train reward models. Instead, we train them using an unsupervised pre-training task and a supervised fine-tuning task, as described in this section. # 3.2.1. TASK 1: PRE-TRAINING Intuitively, one might expect that an LLM-based reward model is capable enough to model the inputs, responses, and correspondences between them, as LLMs have been trained on huge amounts of text. 
Unfortunately, modeling the input-response correspondence is not covered by standard pre-training and fine-tuning tasks for LLMs, and the model still needs to adapt to this modeling task. This has been demonstrated by recent work, where the understanding of responses is found to be very important in modeling human preferences (Wang et al., 2024e; Zheng et al., 2023). To improve response understanding, we train the LLM to generate responses given the input, as illustrated in Figure 3 (a). We refer to this as a "pre-training" procedure, as it does not require human preference data, although it differs from standard pre-training methods in LLMs. Let $D_u$ be a dataset consisting of tuples of input and pairs of responses with no preference specified. The loss function can be defined as: $$ \mathcal{L}_{\mathrm{pre}} = -\mathbb{E}_{(x, y_a, y_b) \sim D_u}\left[\log \pi_\phi([y_a, y_b] \mid x)\right] $$ Here $y_a$ and $y_b$ can be generated by an LLM, so building such a dataset is straightforward. This training objective is similar to that used in instruction fine-tuning (Ouyang et al., 2022). The difference between our method and instruction fine-tuning is that we train the LLM to generate two responses, while instruction fine-tuning trains the LLM to produce a single correct response to the input. By learning the mapping from inputs to varied responses, the model can better understand the responses. Figure 3: Illustration of the two-stage training method. In the first stage, we pre-train the model via response generation, which is an unsupervised task. In the second stage, we fine-tune the model to generate preferences in a standard supervised manner. Furthermore, since the two responses are generated at the same time, the model can gain some general knowledge of response comparison. 
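As a minimal sketch of the pre-training loss in Eq. (4): for one sample, it is simply the negative log-likelihood of the concatenated response pair under the model, which for an autoregressive LM factorizes into per-token probabilities. The per-token probabilities below are hypothetical.

```python
import math

def pretrain_nll(token_probs):
    """Eq. (4) for a single (x, y_a, y_b) sample:
    -log pi_phi([y_a, y_b] | x), with the sequence probability
    factorized into per-token probabilities."""
    return -sum(math.log(p) for p in token_probs)

# Hypothetical per-token probabilities of the tokens in [y_a, y_b]:
pre_loss = pretrain_nll([0.9, 0.7, 0.8])
```

A perfectly confident model (all per-token probabilities equal to 1) attains zero loss; any uncertainty makes the loss strictly positive.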
Note that the order of responses does not matter in pre-training, so we can swap them to create more diverse samples for robust training. # 3.2.2. TASK 2: FINE-TUNING The goal of fine-tuning here is to adapt the model to predict preferences between responses, which is not directly captured by pre-training. We can do this by using the same method described in Eq. (2) (see Figure 3 (b)). As pre-training has helped the model gain knowledge of response comparison from relatively large amounts of data, fine-tuning is easier and requires much less data. In this work, we consider using general-purpose preference data to fine-tune our pre-trained model, thereby obtaining a foundation model that works in various rewarding tasks. If users have their own data, such as human-annotated preference data for a specific task, they can fine-tune this model further. # 3.3. Training with Label Smoothing To further improve generalization, we incorporate fine-tuning with label smoothing. The idea of label smoothing is to redistribute probability mass across all predicted tokens by diverting a fraction of the probability from the correct token (denoted by $w^*$) to the incorrect tokens. For fine-tuning with label smoothing, the loss for a sample $s$ can be defined as: $$ \mathcal{L}_{\mathrm{ls}}(s) = -\sum_{w \in V_l} \Big[ (1-\epsilon) \cdot \mathbf{1}\{w = w^*\} \log \pi_\phi(w \mid s) + \frac{\epsilon}{|V_l| - 1} \cdot \mathbf{1}\{w \neq w^*\} \log \pi_\phi(w \mid s) \Big] $$ where $\epsilon$ is the smoothing factor, and $V_l$ is the vocabulary of label tokens. This formula is general and can handle multi-class problems (i.e., $|V_l| > 2$). 
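A sketch of Eq. (5) for the general multi-class case: the correct label token is weighted by $1-\epsilon$, and the remaining $\epsilon$ is spread uniformly over the $|V_l|-1$ incorrect label tokens. The label-token probabilities below are hypothetical.

```python
import math

def label_smoothed_loss(probs, correct_idx, eps):
    """Eq. (5): label-smoothed cross-entropy over |V_l| label-token
    probabilities, with eps redistributed to incorrect tokens."""
    n = len(probs)
    loss = 0.0
    for i, p in enumerate(probs):
        weight = (1 - eps) if i == correct_idx else eps / (n - 1)
        loss -= weight * math.log(p)
    return loss

# Three label tokens, the first is correct (hypothetical probabilities):
ls_loss = label_smoothed_loss([0.7, 0.2, 0.1], 0, 0.1)
```

Setting $\epsilon = 0$ recovers the plain negative log-likelihood of the correct token, and with two label tokens the formula reduces exactly to Eq. (6).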
But, to simplify the discussion, we still restrict ourselves to the case of two classes. Hence, we express the loss as $$ \mathcal{L}_{\mathrm{ls}}(s) = -\left[ (1-\epsilon) \cdot \log \pi_\phi(w = \mathrm{A} \mid s) + \epsilon \cdot \log \pi_\phi(w = \mathrm{B} \mid s) \right] $$ We can rewrite this formula into another form by noting that $\pi_\phi(w \mid s)$ is the output of a Softmax layer. This gives $$ \mathcal{L}_{\mathrm{ls}}(s) = -\left[ (1-\epsilon) \cdot \log \frac{e^{Z_a(s)}}{e^{Z_a(s)} + e^{Z_b(s)}} + \epsilon \cdot \log \frac{e^{Z_b(s)}}{e^{Z_a(s)} + e^{Z_b(s)}} \right] $$ where $Z_a(s)$ and $Z_b(s)$ are the logits input to the Softmax function. Using simple algebra, we obtain: $$ \mathcal{L}_{\mathrm{ls}}(s) = \underbrace{-\log \sigma(Z_a(s) - Z_b(s))}_{\text{Bradley-Terry model}} + \underbrace{\epsilon \cdot (Z_a(s) - Z_b(s))}_{\text{regularization term}} $$ See Appendix D.2 for a more detailed derivation. The first term of Eq. (8) is the loss based on the Bradley-Terry model, and the second term serves as a regularization term. This result is quite interesting: we are actually optimizing a regularized Bradley-Terry loss. It also establishes a connection between the discriminative and generative training methods discussed in Section 2.1: both methods essentially train LLMs to perform pairwise ranking. 
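The identity between the two-class label-smoothed loss of Eq. (6) and the regularized Bradley-Terry form of Eq. (8) is easy to verify numerically. The sketch below evaluates both forms on arbitrary (hypothetical) logits and confirms they agree.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def loss_label_smoothed(z_a: float, z_b: float, eps: float) -> float:
    """Eq. (7): label-smoothed cross-entropy over the two label logits."""
    denom = math.exp(z_a) + math.exp(z_b)
    return -((1 - eps) * math.log(math.exp(z_a) / denom)
             + eps * math.log(math.exp(z_b) / denom))

def loss_regularized_bt(z_a: float, z_b: float, eps: float) -> float:
    """Eq. (8): Bradley-Terry loss plus the epsilon-weighted logit margin."""
    d = z_a - z_b
    return -math.log(sigmoid(d)) + eps * d

# The two forms coincide for any logits and smoothing factor:
gap = loss_label_smoothed(2.0, 0.5, 0.1) - loss_regularized_bt(2.0, 0.5, 0.1)
```

The agreement follows from $\sigma(Z_a - Z_b) = e^{Z_a} / (e^{Z_a} + e^{Z_b})$ and $\log \sigma(-d) = \log \sigma(d) - d$.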
Note that while label smoothing is theoretically appealing (Müller et al., 2019; Lukasik et al., 2020), past experience shows that it is not very helpful for LLMs. One recent example of this problem is the experiments by Ruan et al. (2024), in which label smoothing degrades the performance of LLMs in some cases. By contrast, in our work, this technique turns out to be very beneficial. We will see in Figure 10 that using label smoothing is critical in training our reward models. # 4. Experiments We evaluated GRAM on various applications, including its accuracy on response ranking, its effectiveness in reward-based fine-tuning, and its adaptability to rewarding tasks. # 4.1. Setups We initialized our GRAM model with the LLaMA-3.1-8B-Instruct and LLaMA-3.2-3B-Instruct models, using a subset of 400k samples from Unified-Feedback for each. The learning rates were set to 2e-5 for the first stage and 1e-5 for the second stage, with training conducted over one epoch in each stage. Note that while this data includes preference labels, we do not use these labels in our pre-training process. Instead, we only use the response pairs to simulate unlabeled data and validate the effectiveness of our method. In the second stage, the label smoothing parameter was set to 0.1, with other settings tested as shown in Figure 10. More details can be found in Appendix B. # 4.2. Baselines We compared GRAM with several strong baselines: LLM-as-a-Judge, where we prompted LLMs like GPT-4o to generate preferences; open-source reward models, i.e., open-source discriminative and generative reward models of approximately 3B or 8B parameters, including ArmoRM-Llama3-8B-v0.1 (Wang et al., 2024d), and others; and reward models trained on the same preference dataset, i.e., standard reward models trained in the discriminative and generative frameworks using Unified-Feedback (Discriminative RM and Generative RM, respectively). We also compared GRAM with several approaches designed to enhance generalization. 
These include the standard Label Smoothing, which prevents the model from becoming overly confident in its predictions; Freeze, which fixes certain parameters of the LLM during training (Zilly et al., 2021); and Regularization, which adds the discriminative reward model loss to the SFT loss function (Yang et al., 2024). # 4.3. Pair-wise Response Ranking Task Setups. Pair-wise ranking is commonly used to test reward models (Lambert et al., 2024). Given test data $D_{\mathrm{pair}}^t = \{(x^t, y_a^t, y_b^t)\}$, where $x^t$ denotes the test input, and $y_a^t$ and $y_b^t$ denote its corresponding responses, the task is to identify the preferred response. The test sample $(x^t, y_a^t, y_b^t)$ can be evaluated using a reward model: $$ \mathrm{Rank}(y_a^t, y_b^t) = \left\{ \begin{array}{ll} y_a^t \succ y_b^t, & \mathrm{if}\ r_\phi(x^t, y_a^t) > r_\phi(x^t, y_b^t) \\ y_a^t \prec y_b^t, & \mathrm{if}\ r_\phi(x^t, y_b^t) > r_\phi(x^t, y_a^t) \\ \mathrm{Tie}, & \mathrm{if}\ r_\phi(x^t, y_a^t) = r_\phi(x^t, y_b^t) \end{array} \right. $$ For generative reward models, when calculating the reward score for one of two responses, the other is used as the reference response. For example, to compute $r_\phi(x^t, y_a^t)$, we use $y_b^t$ as the reference response. Table 1: Accuracies $(\%)$ on pair-wise ranking with both ID (UNIFEED) and OOD (REWARDBENCH and HHH-ALIGNMENT) test sets. 
The best performance in each group is in bold and the second best is underlined. Results marked with † for RewardBench are from Lambert et al. (2024). The other baseline results are obtained by testing the publicly available models or APIs. We use a dotted line to distinguish between the discriminative and generative reward models. We report the average accuracy on the RewardBench and HHH-Alignment sets in the "Avg." column. Results on Generalization. We used the pair-wise response ranking task to evaluate the generalization capability of GRAM. Table 1 shows the results of GRAM and its baselines on both ID and OOD test sets. Firstly, the results here confirm the findings from Section 3.1, demonstrating that discriminative reward models are less effective than generative reward models in generalization, even when enhanced generalization methods are applied. Interestingly, GRAM also outperforms the discriminative reward model on the ID test set, underscoring the substantial improvement and generalization capability of GRAM in reward modeling. Furthermore, compared to LLM-as-a-Judge methods, the 8B Generative RM (Baseline) achieves a competitive score, while GRAM shows a notable improvement, increasing the average score on RewardBench from 80.0 to 85.1. This shows that relying only on prompt engineering, even with a strong LLM, yields a suboptimal reward model. This finding is consistent with the result in Zhang et al. (2024). Additionally, compared to open-source reward models trained on large-scale, high-quality labeled data, GRAM demonstrates competitive performance. As shown in Figure 6, GRAM outperforms these open-source models as more fine-tuning data is used, achieving an average accuracy of 91.6 on RewardBench. From the results, we also observe that GRAM underperforms compared to discriminative models on ID data, which may raise concerns about overfitting in the discriminative models rather than better generalization. However, this is not the case. 
First, the ID test set evaluates the model's ability to learn human preferences from labeled data, and our goal is to excel in both ID and OOD tasks. As shown by the LLaMA-3.1-8B-Instruct results in Table 1, GRAM achieves the best OOD results and the second-best ID results. Figure 4: Performance of GRAM and its baselines on BoN sampling. We use proxy scores to assess preference learning and oracle scores to evaluate the generalization capability. "D-" and "G-" denote that the reward model is trained using the discriminative and generative reward modeling frameworks, respectively. Second, while GRAM underperforms relative to Discriminative RM+Regularization on the LLaMA-3.1-8B-Instruct model, it outperforms both the Discriminative RM (Baseline) and Discriminative RM+Freeze, demonstrating GRAM's strong performance. Additionally, we find that regularization's effectiveness is model-dependent, as it performs worse than GRAM on the LLaMA-3.2-3B-Instruct model. # 4.4. List-wise Response Ranking In practice, multiple responses are typically generated for reranking. Given a list-wise test set $D_{\mathrm{list}}^t = \{(x^t, y_1^t, y_2^t, \cdots, y_k^t)\}$, where $k$ denotes the list size, we begin by randomly selecting a response $y_j^t$ as the reference response $y_{\mathrm{ref}}$. We then compute reward scores $\{r_\phi(x^t, y_1^t), r_\phi(x^t, y_2^t), \cdots, r_\phi(x^t, y_{k-1}^t)\}$ for the remaining responses via Eq. 3. These scores are subsequently used for ranking these responses. Additionally, when the goal is to find the best response from the response list, a straightforward linear search approach can be employed. 
Specifically, we start by defining $y_1^t$ as the best response $y_b^t$ and comparing it iteratively with the remaining responses using the generative reward model. At each comparison, if $y_b^t$ is found to be inferior, it is replaced by the compared response. Through this process, we can determine the best response. To support parallel computation and enhance efficiency, we also incorporate optimization algorithms, such as divide-and-conquer. Task Setups. We used best-of-$n$ (BoN) sampling to evaluate GRAM on list-wise ranking. We performed BoN sampling on the LLaMA-3.1-8B-Instruct model using $k$ responses per input. The test set was AlpacaEval2 (Li et al., 2023). In all BoN experiments, we trained a proxy reward model on a 40k subset of the Unified-Feedback dataset to provide a proxy score for the responses selected by GRAM and its baselines. Additionally, we trained an oracle reward model using preference data from AlpacaFarm (Dubois et al., 2023), which accurately measures response quality to assess generalization, as AlpacaFarm's preference data is distributed alongside AlpacaEval2. Following Gao et al. (2023)'s work, we varied the KL divergence between 0 and 4.5, which corresponds to a range of $k$ from 1 to 244 responses, according to the equation $\mathrm{KL}_{\mathrm{BoN}} = \log k - \frac{k-1}{k}$. Results of Best-of-$n$ Sampling. Figure 4 presents the BoN sampling results for reward models of the 3B and 8B sizes. When comparing discriminative and generative reward models, we observe that the discriminative reward model yields a strong proxy score but underperforms in oracle scores. This indicates that while the discriminative reward model exhibits robust preference learning, its generalization capability is weaker, consistent with observations in pair-wise ranking. In contrast, GRAM excels in list-wise ranking in both proxy and oracle reward model evaluations. 
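The linear-search selection described above, together with the KL divergence induced by best-of-$k$ sampling, $\mathrm{KL}_{\mathrm{BoN}} = \log k - (k-1)/k$, can be sketched as follows. Here `prefers` is a hypothetical stand-in for the pairwise generative comparison, not the paper's implementation.

```python
import math

def best_response(responses, prefers):
    """Linear search over a response list: keep a running best and
    replace it whenever the comparator prefers the challenger.
    `prefers(current, challenger)` returns True if the challenger wins."""
    best = responses[0]
    for candidate in responses[1:]:
        if prefers(best, candidate):
            best = candidate
    return best

def kl_bon(k: int) -> float:
    """KL divergence induced by best-of-k sampling:
    KL_BoN = log k - (k - 1) / k."""
    return math.log(k) - (k - 1) / k

# Toy comparator: prefer the longer string (a stand-in for the reward model).
winner = best_response(["a", "abc", "ab"], lambda cur, cand: len(cand) > len(cur))
```

With this formula, $k = 1$ gives a KL of 0 and $k = 244$ gives a KL of roughly 4.5, matching the range reported above.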
We observe a decline in oracle scores for baseline models when the KL divergence exceeds 3, attributable to over-optimization. However, GRAM mitigates this issue, demonstrating its potential as a reliable foundation reward model in RLHF. We further evaluate its performance during PPO fine-tuning, as shown in Appendix C.1. # 4.5. Reward Model Adaptation The adaptability of a reward model is crucial for its performance across various tasks, as it enables the model to effectively adjust to different environments and preferences (Cheng et al., 2023; Wang et al., 2024a). To evaluate GRAM's adaptability, we conducted experiments on two distinct tasks: adapting to the summarization task and adapting to the harmlessness preference type. For each task, we fine-tuned GRAM on a small, labeled dataset containing task-specific preference data, followed by testing it on the corresponding task-specific test sets. Figure 5: The performance of reward models fine-tuned with varying amounts of task-specific preference data (summarization and harmlessness). Please refer to Figure 11 for the results on the four remaining baselines, including D-Freeze, D-Regularization, G-Freeze, and G-Label Smoothing. Figure 6: Performance scaling laws for different amounts of unlabeled data used in the first stage. "0k unlabeled data" refers to training GRAM solely in the second stage, without using any unlabeled data for pre-training. Task Setups. For each task, we vary the amount of task-specific preference data across $\{0\mathrm{k}, 1\mathrm{k}, 3\mathrm{k}, 5\mathrm{k}, 7\mathrm{k}, 10\mathrm{k}\}$, derived from the preference data labeled by Stiennon et al. (2020) and Bai et al. (2022), respectively. We used the full task-specific datasets (92k samples for summarization and 42k for harmlessness) to train reward models that served as baselines (Oracle RM). 
We also trained reward models based on the LLaMA-3.1-8B-Instruct and LLaMA-3.2-3B-Instruct models, using only the task-specific preference data, as baselines (denoted as Vanilla RM). Results on Reward Model Adaptation. Figure 5 shows the accuracies of the reward models, which are fine-tuned with different amounts of summarization and harmlessness preference data. We see that fine-tuning GRAM with a small amount of preference data, such as 1k or 3k samples, is sufficient to yield high-quality reward models. Notably, using only 3k summarization samples, we achieve a task-specific reward model that performs comparably to one trained on the 92k samples (75.6 vs. 77.8). This shows that GRAM can substantially reduce the need for preference data labeling in reward modeling. Furthermore, GRAM consistently outperforms the baselines as a foundation reward model, underscoring its efficiency in adapting to task-specific requirements with minimal data. More experimental results can be found in Appendix C. # 5. Analysis # 5.1. Scaling Unlabeled Data for Improved Performance To further investigate the impact of pre-training with unlabeled data on GRAM’s performance, we trained GRAM using the LLaMA-3.1-8B-Instruct and LLaMA-3.2-3B-Instruct models with varying amounts of unlabeled and labeled data. The model’s performance was evaluated on the OOD test set (RewardBench), as shown in Figure 6. The results demonstrate that as the amount of unlabeled data increases, the accuracy of GRAM generally improves for both models, with the most significant gains observed when moving from 0k to 200k unlabeled data. This demonstrates the crucial role of unlabeled data and its scaling effect on performance, suggesting that using larger unlabeled datasets can lead to better reward models. Table 2: Accuracy (%) of different GRAM variants. # 5.2.
Impact of Domain Difference Between Pre-training and Fine-tuning on Reward Model Adaptation We investigate the impact of domain differences on reward model adaptation. More specifically, we evaluate three variations of GRAM in the context of reward model adaptation: • GRAM: Pre-trained on 100k general unlabeled data (including summarization responses), followed by fine-tuning on 5k labeled summarization data. • GRAM w/ Domain: Pre-trained on 100k unlabeled summarization response pairs derived from TL;DR comparison data (Stiennon et al., 2020), followed by fine-tuning on 5k labeled summarization data. • GRAM w/o Domain: Pre-trained on 100k general unlabeled data (excluding summarization-related data; specifically, preference data related to summarization is filtered out using GPT-4o), followed by fine-tuning on 5k labeled summarization data. The experimental results are listed in Table 2. These results demonstrate that pre-training on data more closely aligned with the target domain leads to better performance in that domain. Specifically, as shown in the table, GRAM pre-trained with domain-specific data (GRAM w/ Domain) achieves an accuracy of 74.7, significantly outperforming the model without pre-training (RM w/o Pre-training), which achieves only 56.5. This observation aligns with the common practice in LLMs, where incorporating domain-specific data during pre-training typically improves performance on downstream tasks. Furthermore, our results show that the pre-training approach exhibits strong robustness. Even with significant domain differences, pre-training still contributes positively to performance, with GRAM (71.6) outperforming both the non-pre-trained model and GRAM w/o Domain (67.4). See more analysis in Appendix D. # 6. Related Work Reward Modeling.
Reward models, trained on human preference data, are central to RLHF and other alignment approaches such as rejection sampling (Lee et al., 2021; Chu et al., 2023). More recently, researchers have extended the use of reward models beyond training and into inference (Wu et al., 2024; Li et al., 2025). Two strands of research have tried to improve these reward models for better LLM alignment. The first focuses on large-scale, high-quality training data, developing either task-specific datasets (Stiennon et al., 2020; Xu et al., 2024) or more general preference datasets (Bai et al., 2022; Cui et al., 2023). The other explores stronger models for reward modeling, such as reward model ensembling (Coste et al., 2024; Min et al., 2024). Although reward modeling through these methods captures human preferences effectively, they often rely heavily on labeled data. Researchers have noticed this issue: for example, Lee et al. (2023) employed LLMs to replace human annotators, and Cui et al. (2023) developed a large-scale preference dataset for general-purpose use. However, these efforts overlook the potential of vast amounts of unlabeled data. Foundation Models. This work joins a large body of work demonstrating that a neural network trained on unlabeled data at scale can acquire general knowledge and adapt easily to a wide range of downstream tasks (Moor et al., 2023; Xiao & Zhu, 2025; 2023). Such a guiding principle has motivated the development of many successful LLMs, such as the BERT and GPT series (Devlin et al., 2019; Brown et al., 2020). Here we extend this idea to training reward models, though at a much smaller training scale than that of LLMs. The result of this work is somewhat unsurprising but encouraging: the broad effectiveness of foundation models can be verified in more research areas, and models of this kind can be successfully applied in fields that traditionally rely on highly specialized models.
In aligning large language models (LLMs), reward models play an important role, but they are typically trained as discriminative models and rely only on labeled human preference data. In this paper, we explore methods that train reward models using both unlabeled and labeled data. Building on the generative models in LLMs, we develop a generative reward model that is first trained via large-scale unsupervised learning and then fine-tuned via supervised learning. We also show that by using label smoothing, we are in fact optimizing a regularized pairwise ranking loss. This result, in turn, provides a new view of training reward models, which links generative and discriminative models under the same class of training objectives. The outcome of these techniques is a foundation reward model, which can be applied to a wide range of tasks with little or no further fine-tuning effort. Extensive experiments show that this model generalizes well across several tasks, including response ranking, reinforcement learning from human feedback, and task adaptation with fine-tuning, achieving significant performance improvements over several strong baseline models.
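The label-smoothing result mentioned above refers to the standard label-smoothed pairwise (Bradley–Terry style) ranking loss. A minimal sketch of that objective, with the smoothing coefficient `eps` as an illustrative value:

```python
import math


def smoothed_ranking_loss(r_w, r_l, eps=0.1):
    """Label-smoothed pairwise ranking loss:

    L = -(1 - eps) * log sigmoid(r_w - r_l) - eps * log sigmoid(r_l - r_w)

    where r_w and r_l are rewards for the chosen and rejected responses.
    With eps > 0 the second term acts as a regularizer on the reward margin.
    """
    def log_sigmoid(x):
        # Numerically stable log(1 / (1 + exp(-x))).
        return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

    return -(1 - eps) * log_sigmoid(r_w - r_l) - eps * log_sigmoid(r_l - r_w)
```

The regularization effect is visible in the minimizer: for `eps > 0` the loss is minimized at a finite margin $r_w - r_l = \log\frac{1-\epsilon}{\epsilon}$ rather than at an unbounded one, which is the "regularized pairwise ranking" view described above.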
# I. INTRODUCTION Ego-motion estimation is critical for autonomous navigation [1], using either proprioceptive sensors (e.g., odometers, IMUs) or exteroceptive sensors (e.g., cameras, LiDAR, radar). While proprioceptive sensors offer reliable short-term odometry, they accumulate drift without external correction. In GNSS-denied environments like indoors, tunnels, or urban canyons, autonomous systems must rely on proprioception, often fused with complementary sensing modalities. Exteroceptive sensors, like cameras and LiDAR, provide rich spatial information that aids in accurate mapping and localization. However, these sensors suffer in adverse weather or low-visibility conditions, such as heavy rain, fog, or dust. In contrast, radar sensors are known for their robustness under such conditions. Recent advancements in high-resolution millimeter-wave (mmWave) radar technology have enhanced its applicability, enabling robust ego-motion estimation even when optical sensors fail [2], [3]. Radar sensors provide Doppler velocity measurements of surrounding targets, which can be leveraged for ego-motion estimation. Some approaches focus on instantaneous ego-velocity estimation from a single radar scan [4], avoiding the need for feature detection and tracking, which are commonly required by registration-based methods [5], [6]. However, single-scan radar-based methods assume that all detected targets are stationary, requiring outlier rejection methods such as RANSAC. These assumptions may not hold in highly dynamic environments, leading to degraded accuracy [7]. Most radar-based ego-motion methods rely on radar point clouds generated through traditional multi-step processing pipelines, which may lead to reduced data richness or resolution. Our approach addresses these limitations by operating directly on raw, complex-valued radar data rather than relying on pre-processed radar point clouds. 
Specifically, we introduce a complex-valued deep neural network (CVNN) [8] that estimates translational ego-velocity directly from complex I/Q (in-phase and quadrature) ADC (analog-to-digital converter) data. The CVNN preserves phase information, which is typically lost in magnitude-based representations, allowing it to better capture motion-related features. A key aspect of our method is uncertainty quantification. Unlike conventional networks that provide only a point estimate, our CVNN also predicts an associated uncertainty, which models the heteroscedastic [9] noise inherent in radar measurements. This uncertainty-aware output is crucial for sensor fusion, as it allows the system to appropriately weigh the reliability of radar-based velocity estimates. To further improve robustness, we integrate the CVNN’s velocity predictions with an Extended Kalman Filter (EKF), fusing radar data with IMU measurements. The EKF plays a crucial role in bias correction and adaptive sensor fusion: it dynamically adjusts its measurement covariance matrix based on the predicted radar uncertainty and the observed IMU noise characteristics. In practice, when the CVNN reports high uncertainty, the EKF down-weights the radar measurements, relying more on IMU predictions. Conversely, when radar confidence is high, it is given greater importance in the fusion process. This adaptive covariance adjustment ensures that each sensor’s contribution is dynamically weighted according to its reliability at any given moment. The fusion of radar-based ego-velocity estimation with the EKF enables more accurate high-frequency estimation. This integration is critical, as IMU-based methods alone suffer from drift over time, particularly in the absence of GNSS corrections. Our approach therefore not only leverages radar’s all-weather capability but also compensates for the long-term drift of IMU integration, leading to a highly robust and accurate ego-motion estimation system.
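Heteroscedastic uncertainty of the kind described above is typically learned with a Gaussian negative log-likelihood, which is also the form of the loss given later in Section III-A3. A minimal full-covariance sketch (the stabilizing constant `eps` is an illustrative choice):

```python
import numpy as np


def gaussian_nll(y, y_hat, Sigma, eps=1e-6):
    """Negative log-likelihood of ground truth y under N(y_hat, Sigma).

    A large predicted covariance down-weights the squared residual but
    pays a log-determinant penalty, so the network is pushed to report
    uncertainty that matches its actual error. eps stabilizes the inverse.
    """
    S = Sigma + eps * np.eye(len(y))
    r = y - y_hat
    return 0.5 * np.log(np.linalg.det(S)) + 0.5 * float(r @ np.linalg.inv(S) @ r)
```

This trade-off is exactly what makes the predicted covariance usable as an adaptive measurement noise for the EKF: inflating it on a large residual lowers the loss, so confident-but-wrong predictions are penalized.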
# Our Contributions • We introduce a novel complex-valued neural network that directly processes 3D raw radar complex signals to estimate instantaneous linear ego-velocity. This bypasses the traditional multi-step signal processing without losing informative features in the radar scans. • Our method predicts both the ego-velocity and its associated uncertainty by modeling the noise-induced variability in radar measurements. This heteroscedastic uncertainty quantification enhances the reliability of velocity estimates, which is crucial for downstream applications such as sensor fusion. • We integrate radar-based velocity estimates with IMU data using an EKF. The EKF leverages the learned uncertainty from the radar network to dynamically adjust measurement noise, mitigating IMU drift and bias accumulation over time while estimating 3D ego-velocity. • We validate our method on the Coloradar dataset, achieving lower error in velocity component estimation compared to traditional instantaneous radar ego-velocity and scan-matching based methods. # II. RELATED WORK Recent advancements in high-resolution millimeter-wave radar sensors have increasingly attracted attention due to their robust performance under adverse weather and low-visibility conditions, where traditional optical sensors such as cameras and LiDAR typically fail [10], [11]. LiDAR-based [12] and camera-based odometry [13] methods have been extensively explored, achieving high accuracy in ego-motion estimation under favorable conditions. However, millimeter-wave radars offer significant advantages over optical sensors, especially in adverse visibility and weather conditions, prompting recent research into radar-based odometry and simultaneous localization and mapping (SLAM). Many radar odometry and SLAM methods have employed 2D spinning radars, utilizing frame-to-frame registration techniques [14]–[18].
Despite their high angular resolution $(<1^{\circ})$, spinning radars provide only two-dimensional data and lack Doppler velocity measurements. Automotive System-on-Chip (SoC) radars, on the other hand, offer limited fields of view but can provide both 2D and 3D target measurements along with Doppler velocities [19]. Typically, these radars produce sparse radar point clouds derived from processed raw data samples. Radar-based ego-motion estimation using these radars falls into two main categories: classical registration methods inspired by LiDAR-based techniques, such as the Normal Distributions Transform (NDT) [20], and end-to-end neural network approaches [21], [22]. Additionally, some methods utilize full 3D heatmaps obtained directly from 3D MIMO radar [23], which represent signal intensities and Doppler velocities across the range, azimuth, and elevation dimensions. This representation has been explored for 6-DoF ego-motion estimation using architectures combining 3D CNNs and transformers [6]. Another prominent category involves instantaneous ego-velocity estimation from single radar scans using Doppler measurements. Since a single radar sensor measures only radial velocities, it inherently limits estimation to translational components. To achieve full ego-motion estimation, multiple radar sensors [24] or complementary sensors like IMUs are required [25], [26]. For instance, Kellner et al. [24] employed multiple radar sensors to estimate complete motion. Similarly, Doer et al. [25] fused radar-derived velocity estimates with IMU measurements using an EKF for complete 3D estimation. Another recent approach [26] integrated millimeter-wave radar measurements and IMU data through batch optimization over sliding windows. Several other recent studies [27], [28] have similarly explored radar-inertial odometry using advanced fusion and optimization techniques.
Typically, dynamic targets in radar data are filtered using algorithms such as RANSAC, followed by least-squares optimization [24]. However, these filtering processes can discard valuable information and potentially introduce inaccuracies. Our approach differs from these previous methods by directly utilizing complex-valued ADC radar data without intermediate processing steps, as introduced by [29]. Specifically, we propose a novel complex-valued neural network architecture that predicts instantaneous 3D linear ego-velocity while quantifying uncertainty through covariance estimation. We then fuse these radar-based velocity and uncertainty estimates with IMU measurements using an EKF. This combination addresses the limitations posed by noise and biases in IMU data, ensuring accurate and reliable estimation of the complete ego-motion state. # III. PROPOSED METHOD Our method is a loosely coupled fusion method that exploits Doppler radar measurements for linear ego-velocity and the IMU for angular ego-velocity, resulting in full ego-velocity estimation. In this section, we first introduce our CV-RDCNet (Complex-Valued Range-Doppler-Channel Network) architecture and loss function, and then the fusion part, which includes the IMU model and the fusion filter. # A. Radar CV-RDCNet for linear ego-velocity Given a radar input scan $S$ as shown in Fig. 1, our CV-RDCNet (Fig. 2), parameterised by weights $\pmb{\theta}$, predicts the instantaneous linear ego-velocity, represented by a mean vector $\hat{\mathbf{y}} \in \mathbb{R}^3$, and an associated covariance matrix $\pmb{\Sigma} \in \mathbb{R}^{3\times3}$: $$ [\hat{\mathbf{y}}, \pmb{\Sigma}] = f_{\pmb{\theta}}(S), $$ where $\hat{\mathbf{y}} = [\hat{V}_x, \hat{V}_y, \hat{V}_z]^{\top}$ is the predicted mean ego-velocity vector, and $\pmb{\Sigma}$ quantifies the aleatoric uncertainty associated with these predictions.
Raw radar data (ADC) is represented in complex form, containing phase and frequency information distributed across samples, chirps, and receiver channels. These dimensions respectively represent range bins, Doppler bins, and Tx–Rx channels, as shown in Fig. 1. Fig. 1: TI millimeter-wave (AWR2243) cascade radar signal processing pipeline illustrating the conversion of raw ADC data into a 3D complex-valued data cube via 3D FFT. Our method directly utilizes this minimally processed complex-valued tensor (highlighted block), which encodes range, Doppler, and angular information, as input to the proposed CV-RDCNet. Fig. 2: Architecture of the proposed CV-RDCNet: a complex-valued convolutional network with attention mechanisms and residual connections for probabilistic ego-velocity estimation from radar data. To effectively manage these complex inputs, complex-valued neural networks have been proposed in the literature [30]. There is also significant work on high-resolution radar data processing [31] and de-noising techniques [32]. For 2D ego-velocity estimation, a complex-valued neural network with channel attention has been explored [33], showing significant improvement. Building on that approach, we use CV-RDCNet for 3D ego-velocity estimation from a single radar scan and fuse the output with the IMU. Our network comprises a feature extractor and a state estimation block, as illustrated in Fig. 2. The feature extractor in our network includes convolutional residual blocks that feature dual residual complex convolutions enhanced by attention modules. Our end-to-end network processes the complex data cube $S$ (as shown in Fig. 1).
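The complex-valued building blocks used below can be illustrated with NumPy's native complex arrays. CReLU, which applies ReLU to the real and imaginary parts separately, is one common choice of complex activation; both it and the toy 1-D convolution here are assumptions standing in for the paper's exact layer definitions:

```python
import numpy as np


def complex_relu(z):
    """CReLU: a common complex activation that applies ReLU separately
    to the real and imaginary parts (the paper's exact variant may differ)."""
    return np.maximum(z.real, 0) + 1j * np.maximum(z.imag, 0)


def complex_conv1d_valid(z, w):
    """Toy complex-valued 'valid' convolution: a complex kernel w slid over
    a complex signal z, an illustrative stand-in for ComplexConv layers."""
    n, k = len(z), len(w)
    return np.array([np.sum(z[i:i + k] * w) for i in range(n - k + 1)])
```

The key property is that multiplications stay in the complex domain, so the phase relationships between samples, chirps, and channels, which carry the Doppler information, are preserved through the layers.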
The extracted features are then flattened and passed through the state estimation block. The feature extractor contains a convolution group with three Complex Residual Blocks (CRBs) followed by a complex convolution layer. Each CRB incorporates a complex-valued convolution, followed by a batch normalization layer, a ReLU activation, and both channel and spatial attention mechanisms. 1) Complex Residual Block: To extract multi-scale features from the radar data, denoted as $S \in \mathbb{C}^{R \times D \times A}$, where $R$, $D$, and $A$ represent the range bins ($R = 256$), Doppler bins ($D = 16$), and angle bins ($A = 192$) respectively, the following steps are implemented within our network: $$ F_1 = \mathrm{ComplexReLU}(\mathrm{ComplexBN}(\mathrm{ComplexConv}(S))) $$ In Equation 2, the raw data $S$ is first passed through a complex convolution layer (ComplexConv), followed by complex batch normalization (ComplexBN), and then activated using the ComplexReLU function. This process initiates the feature extraction by transforming the input data into a more abstract representation. $$ F_2 = \mathrm{ComplexBN}(\mathrm{ComplexConv}(F_1)) $$ Equation 3 further processes the features $F_1$ from the previous layer through another complex convolution layer, and the output is normalized using complex batch normalization. This step enhances the stability and efficiency of the network by standardizing the features before they are further processed. $$ F_3 = \mathrm{SpatialAttention}(\mathrm{ChannelAttention}(F_2)) $$ In Equation 4, an attention mechanism (Spatial + Channel) is applied to $F_2$, which allows the network to focus on the most informative features by weighting them based on their significance for ego-velocity estimation. We use spatial attention on the feature maps (Doppler, Channels) and channel attention on the samples dimension. Moreover, each complex-valued residual block in the network incorporates a skip connection.
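The concatenation-style skip connection of each CRB can be sketched in a couple of lines; the channel axis used here is an assumption for illustration:

```python
import numpy as np


def crb_with_skip(x, block):
    """Apply a residual block and concatenate its output with the input
    along the channel axis (axis 0 here), as described for each CRB.

    `block` is any callable mapping a complex tensor to a tensor of the
    same shape; concatenation (rather than addition) doubles the channel
    count while keeping a direct gradient path to the input.
    """
    return np.concatenate([block(x), x], axis=0)
```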
This means that the output of each block is concatenated with its input before being passed to the subsequent blocks. This architectural choice helps to mitigate the vanishing gradient problem during training by allowing gradients to flow directly through the network layers, thus enhancing the learning and convergence of the network [34]. The network is designed to effectively handle the complex-valued input from radar scans, ensuring robust feature extraction for subsequent processing stages. 2) Linear Ego-motion Prediction with Uncertainty: The features extracted from the radar scan are first down-sampled using strided convolutions to reduce the spatial dimensions of the feature maps for computational efficiency. The down-sampled features are passed through a fully connected (FC) layer after flattening to predict the mean of the linear ego-velocity components, $V_x$, $V_y$, and $V_z$. In parallel, a second FC head predicts the parameters required to construct a covariance matrix that represents the uncertainty [9], [35] in these predictions. Formally, let $F_3$ denote the feature maps produced by the preceding layers. The processing is defined as: $$ \hat{\mathbf{y}} = \mathrm{Linear}_{\mu}(\mathrm{flatten}(\mathrm{Abs}(\mathrm{SConv}_{1\times1}(F_3)))) $$ and the covariance matrix is reconstructed as: $$ \pmb{\Sigma} = \mathrm{ConstCov}(\mathrm{Linear}_{\Sigma}(\mathrm{flatten}(\mathrm{Abs}(\mathrm{SConv}_{1\times1}(F_3))))) \quad (6) $$ where: • $\mathrm{Linear}_{\mu}(\cdot)$ is the FC head that predicts the mean vector $\hat{\mathbf{y}} \in \mathbb{R}^3$, • $\mathrm{Linear}_{\Sigma}(\cdot)$ is the FC head that outputs six parameters from which the covariance matrix $\pmb{\Sigma} \in \mathbb{R}^{3\times3}$ is reconstructed using a Cholesky decomposition, • $\mathrm{ConstCov}(\cdot)$ denotes the transformation from the six predicted parameters to the full covariance matrix.
• $\mathrm{SConv}(\cdot)$ is the strided convolution layer. Here, the predicted mean $\hat{\mathbf{y}}$ represents the estimated linear ego-velocity components, and the covariance matrix $\pmb{\Sigma}$ provides a measure of the aleatoric uncertainty [9], [36] in these predictions. 3) Loss Function: Our network predicts a mean vector $\hat{\mathbf{y}} \in \mathbb{R}^3$ and a covariance matrix $\pmb{\Sigma} \in \mathbb{R}^{3\times3}$ for the three components of linear ego-velocity (i.e., $V_x$, $V_y$, and $V_z$). We assume that the ground truth $\mathbf{y}$ follows a multivariate Gaussian distribution: $$ \mathbf{y} \sim \mathcal{N}(\hat{\mathbf{y}}, \pmb{\Sigma}). $$ The corresponding negative log-likelihood (NLL) loss is given by: $$ \mathcal{L}_{\mathrm{NLL}} = \frac{1}{2} \log |\pmb{\Sigma}| + \frac{1}{2} (\mathbf{y} - \hat{\mathbf{y}})^{\top} \pmb{\Sigma}^{-1} (\mathbf{y} - \hat{\mathbf{y}}) $$ A small constant $\epsilon$ is added to the diagonal of $\pmb{\Sigma}$ to ensure numerical stability. To promote meaningful uncertainty estimates and to prevent the covariance values from becoming too small, we include an additional diagonal regularization term. Diagonal Regularization: This term penalizes overly small variances, ensuring that the diagonal elements of $\pmb{\Sigma}$ do not become arbitrarily small [37]: $$ R_{\mathrm{diag}} = \lambda_1 \mathbb{E}\left[\frac{1}{\mathrm{diag}(\pmb{\Sigma}) + \epsilon}\right], $$ where $\lambda_1$ is a regularization coefficient. The final loss function is a weighted sum of these components: $$ \mathcal{L} = \mathcal{L}_{\mathrm{NLL}} + R_{\mathrm{diag}} $$ # B.
IMU Kinematics and Fusion Filters In this section, we present the modeling approach for the IMU and describe how its measurements are fused with the ego-velocity output of the trained CV-RDCNet model (Fig. 3a). Accurate motion estimation is essential for autonomous navigation, particularly when combining data from different sensor modalities. Since inertial sensors such as IMUs are prone to bias and noise, especially over time, estimating these biases is crucial for maintaining the accuracy and reliability of the overall system. To address this, we adopt an Inertial Navigation System (INS) framework that includes bias modelling. The core equations governing the INS mechanism are outlined below: $$ \begin{array}{rl} & \dot{\mathbf{q}}_I^W = \frac{1}{2} \Omega(\omega_S - \mathbf{b}_g) \mathbf{q}_I^W, \\ & \dot{\mathbf{v}}_W = \mathbf{R}_I^W (\mathbf{a}_S - \mathbf{b}_a) - \mathbf{g}^W, \\ & \dot{\mathbf{b}}_g = w_{b_g,\mathrm{noise}}, \\ & \dot{\mathbf{b}}_a = w_{a_g,\mathrm{noise}}, \end{array} $$ The state vector $\mathbf{x}$ for the EKF is defined as: $$ \mathbf{x} = \left[ q_I^W \quad \mathbf{V}_I \quad \mathbf{b}_g \quad \mathbf{b}_a \right]^T $$ where: • $q_I^W$ is the quaternion representing the orientation of the IMU in the world frame, and $\mathbf{R}_I^W$ is the corresponding rotation matrix. • $\mathbf{V}_I$ is the linear velocity of the IMU in the world frame. • $\mathbf{b}_g$ is the bias of the gyroscope. • $\mathbf{b}_a$ is the accelerometer bias. The term $\Omega(\omega)$ is the quaternion multiplication matrix, which maps the angular velocity vector to the quaternion space.
It is defined as: $$ \Omega(\omega) = \left[ \begin{array}{cccc} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{array} \right] $$ This matrix is used to compute the derivative of the quaternion, ensuring proper propagation of the IMU orientation. We define $$ \pmb{\delta} = [w_{b_g,\mathrm{noise}}, w_{a_g,\mathrm{noise}}]^T $$ as zero-mean, uncorrelated Gaussian white noise. # Algorithm 1 Estimation Algorithm # Initialize: $$ x_0 \sim \mathcal{N}(\hat{x}_0, P_0) $$ # Prediction: while no new measurement do $$ \begin{array}{rl} & F_{k-1} = \left. \frac{\partial f}{\partial x} \right|_{x=\hat{x}_{k-1}, \delta=\hat{\delta}_{k-1}} \\ & L_{k-1} = \left. \frac{\partial f}{\partial \delta} \right|_{x=\hat{x}_{k-1}} \\ & \hat{x}_k = f(\hat{x}_{k-1}, \hat{\delta}_{k-1}) \\ & P_k = F_{k-1} P_{k-1} F_{k-1}^T + L_{k-1} Q_{k-1} L_{k-1}^T \end{array} $$ end while # Update: if new measurement $z_k$ arrives then $$ \begin{array}{rl} & H_k = \left. \frac{\partial h}{\partial x} \right|_{x=\hat{x}_k} \\ & \tilde{z}_k = h(\hat{x}_k) \\ & K_k = P_k H_k^T (H_k P_k H_k^T + R_k)^{-1} \\ & \hat{x}_k = \hat{x}_k + K_k (z_k - \tilde{z}_k) \\ & P_k = (I - K_k H_k) P_k \end{array} $$ end if
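The update step of Algorithm 1 can be sketched directly. Passing the network-predicted covariance in as $R_k$ is what makes the fusion uncertainty-aware: a larger $R_k$ shrinks the Kalman gain and down-weights the radar measurement.

```python
import numpy as np


def ekf_update(x_hat, P, z, h, H, R):
    """Measurement update of Algorithm 1: innovation, Kalman gain,
    then state and covariance correction.

    R is the measurement noise covariance; supplying the CV-RDCNet
    covariance prediction here adapts the radar weighting per scan.
    """
    z_tilde = h(x_hat)                       # predicted measurement h(x)
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_hat + K @ (z - z_tilde)        # state correction
    P_new = (np.eye(len(x_hat)) - K @ H) @ P  # covariance correction
    return x_new, P_new
```

As a scalar sanity check with prior state 0, unit prior covariance, identity measurement, and measurement 1: with R = 1 the corrected state is 0.5, while with R = 9 it is only 0.1, illustrating the down-weighting described in the introduction.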
In this paper, the world frame is defined as the East-North-Up (ENU) coordinate system, and $\mathbf{g}^W$ is the gravity vector in this frame. Since the measurements are the ego-velocities from the radar, based on Fig. 3b we define the measurement equation as follows: $$ \mathbf{V}_R^W = \mathbf{V}_I + \mathbf{R}_I^W [\omega_I]^{\times} \mathbf{P}_R^I $$ where $\mathbf{P}_R^I$ represents the transform from the radar frame to the IMU frame. The Jacobian matrix of the IMU kinematics is as follows: $$ F_{k-1} = \left. \frac{\partial f}{\partial x} \right|_{x=\hat{x}_{k-1}, \delta=\hat{\delta}_{k-1}} = \left[ \begin{array}{cccc} \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & -\bar{\mathbf{R}}_I^W & \mathbf{0}_{3\times3} \\ -[\bar{\mathbf{R}}_I^W \mathbf{a}_I]^{\times} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & -\bar{\mathbf{R}}_I^W \\ \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} \\ \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} \end{array} \right] $$ $$ L_{k-1} = \left. \frac{\partial f}{\partial \delta} \right|_{x=\hat{x}_{k-1}} = \left[ \begin{array}{cccc} \bar{\mathbf{R}}_S^W & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} \\ \mathbf{0}_{3\times3} & \mathbf{I}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} \\ \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{I}_{3\times3} & \mathbf{0}_{3\times3} \\ \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{I}_{3\times3} \end{array} \right] $$ and the Jacobian matrix of the measurement model (Equation 15), the measurement matrix $H$, is calculated as follows: $$ H_k = \left. \frac{\partial h}{\partial x} \right|_{x=\hat{x}_k} = \left[ \begin{array}{cccc} -[\mathbf{R}_I^W \mathbf{v}_I]^{\times} & \mathbf{I} & -\mathbf{R}_I^W [\mathbf{P}_R^I]^{\times} & \mathbf{0}_{3\times3} \end{array} \right] $$ Now, based on the linearized model of our system, we can use the EKF to estimate the linear velocity and the biases of the IMU. For the sake of brevity, the estimation algorithm is described as pseudocode in Algorithm 1. TABLE I: Coloradar data sequences used in our training. # IV. EXPERIMENTS In order to evaluate the ego-motion estimation performance of the proposed method, we first performed preprocessing to prepare the input radar data and ground truth. # A.
Data 1) Radar Data: We use the publicly available Coloradar dataset [23], which includes the complex-valued raw ADC data cube from the TI-AWR2243 cascade radar sensor, along with LiDAR, IMU, and ground truth pose trajectories generated in various environments, as listed in Table I. For preprocessing, we performed the 3D-FFT on the raw data to efficiently convert signals from the time domain to the frequency domain, enabling various critical functions such as frequency and phase analysis, range measurement, Doppler processing, and angle estimation, as shown in the processing pipeline in Fig. 1. The complex-valued data cube is used as input for training. It is a three-dimensional complex tensor with dimensions representing the range, Doppler, and angular (Tx–Rx channel) domains. We applied Min-Max normalization to the input complex-valued tensor to scale its values within a consistent range, ensuring stable and efficient training of the neural network. 2) Ground Truth Data: The dataset includes ground truth poses in the sensor rig frame at 10 frames per second (FPS) in the ENU (East-North-Up) frame, which we used to calculate the twists using the method explained in [6]; we use the linear 3D ego-velocity components. For radar-based ego-motion estimation, these ground-truth poses must be in the radar sensor frame. Given that the radar operates at a lower frequency (5 FPS), we match ground truth instances with radar timestamps and transform them from the body frame to the sensor frame using the static transform provided in the dataset. # B. Training We implemented CV-RDCNet using complextorch [38]. A dataset of approximately 25,000 instances was used for training. We performed Min-Max normalization on the ground-truth ego-velocities, as with the preprocessed input data. The network was trained using the Adam optimizer [39], with a learning rate of $10^{-2}$ and a batch size of 128.
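The Min-Max normalization of the complex input is not spelled out in the text; one plausible reading, sketched here as an assumption, scales the real and imaginary parts independently into [0, 1]:

```python
import numpy as np


def minmax_normalize_complex(S):
    """One plausible Min-Max normalization for a complex tensor (the
    exact variant used in the paper is not specified): scale the real
    and imaginary parts independently into [0, 1]."""
    def mm(a):
        lo, hi = a.min(), a.max()
        # Guard against a constant channel to avoid division by zero.
        return (a - lo) / (hi - lo) if hi > lo else np.zeros_like(a)

    return mm(S.real) + 1j * mm(S.imag)
```

Normalizing the two components separately keeps the tensor complex-valued end to end; an alternative would be scaling by the global magnitude range, which additionally preserves per-sample phase.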
For training, we used the negative log-likelihood loss described in Section III-A3. Training was carried out for 150 epochs with early stopping based on the validation loss to prevent overfitting. Batch normalization and dropout were used to stabilize training.

# V. EXPERIMENTAL RESULTS AND ANALYSIS

To validate the fusion algorithm, we performed an experimental analysis on the Coloradar dataset. For evaluation, we selected data sequences that were not part of the CV-RDCNet training set. For each sequence, we report the mean squared error (MSE) and mean absolute error (MAE). Our fusion algorithm results are systematically compared with those of publicly available baseline approaches.

Fig. 3: Radar-Inertial Sensor Fusion Framework: (a) EKF model for estimating ego-velocity using radar-derived velocity from the proposed CV-RDCNet and IMU measurements to improve accuracy by compensating for the limitations of each sensor. (b) The coordinate systems and relative positions of the radar and IMU, showing their transformations in the world frame for accurate motion estimation.

• CV-RDCNet $+$ EKF $( v _ { m } )$: This method uses the mean velocity $( v _ { m } )$ estimated by CV-RDCNet as the measurement input for the EKF, which combines it with IMU data to estimate ego-motion.
• CV-RDCNet $+$ EKF $( v _ { m } , \sigma _ { m } )$: This method considers both the mean velocity $( v _ { m } )$ and the predicted covariance $( \sigma _ { m } )$ from the neural network. The predicted covariance represents an estimate of heteroscedastic aleatoric uncertainty, allowing the EKF to adjust the measurement confidence dynamically.

Uncertainty-aware methods, such as CV-RDCNet $+$ EKF $( v _ { m } , \sigma _ { m } )$, aim to improve sensor fusion by adjusting the measurement covariance dynamically instead of assuming a fixed noise model.
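The role of the network-predicted uncertainty in the filter can be sketched as a standard EKF measurement update in which the measurement noise matrix $R$ is rebuilt from $\sigma_m$ at every step, rather than being fixed. A minimal sketch under assumptions: the state layout and the measurement matrix `H` passed in here are placeholders, not the paper's exact formulation.

```python
import numpy as np

def ekf_update(x, P, v_meas, sigma_meas, H):
    """One EKF measurement update with a dynamic measurement covariance.

    x, P        : prior state estimate and its covariance
    v_meas      : radar-derived ego-velocity from the network (v_m)
    sigma_meas  : per-axis predicted standard deviation (sigma_m)
    H           : measurement matrix (assumed layout)
    """
    R = np.diag(np.asarray(sigma_meas) ** 2)   # dynamic, per-update noise model
    y = np.asarray(v_meas) - H @ x             # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

When the network reports a large $\sigma_m$, the gain shrinks and the filter leans on the IMU-driven prediction; when $\sigma_m$ is small, the radar measurement dominates.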
In real-world scenarios, sensor reliability varies due to external factors such as occlusions, multipath reflections, and noise. Estimating uncertainty along with velocity should, in theory, result in better adaptability.

Fig. 4: Comparison of estimated ego-velocity across different fusion methods (fused ego-velocity, longboardrun2 sequence).

However, despite the expected benefits, the experimental results show that CV-RDCNet $+$ EKF $( v _ { m } )$ slightly outperforms CV-RDCNet $+$ EKF $( v _ { m } , \sigma _ { m } )$. This can be explained by several factors: the EKF assumes Gaussian noise, but the uncertainty predicted by the neural network may not follow a Gaussian distribution; the predicted covariance captures only part of the total uncertainty present in the system; and the estimated uncertainty may not always be accurate, especially in complex and dynamic environments, leading to suboptimal weighting in the EKF. This highlights that while incorporating uncertainty is beneficial for modelling sensor behaviour, improper calibration and assumptions about the underlying uncertainty distribution can lead to suboptimal filtering performance, as discussed in [40].

# A. Comparison with Baseline Methods

We evaluated CV-RDCNet along with the fusion methods on the test sequences listed in Table II and compared them with two existing methods. As shown in Fig. 5, even the model without fusion outperforms the existing models, and both proposed methods significantly improve ego-motion estimation compared to previous approaches:

• RIO [41]: this publicly available package, which we ran on the processed radar scans, produces higher errors across all the test sequences, showing the limitations of the previous method.
• 4DEgo [6]: an end-to-end registration method operating on the heatmap data obtained after processing the radar scans. It performs better than RIO but is still outperformed by both of our proposed methods.

TABLE II: MSE $( \mathrm { m / s } ) ^ { 2 }$, MAE $\mathrm { ( m / s ) }$, and $\sigma _ { m }$ (for fusion methods) for different test sequences and fusion algorithms (best results in bold).

Fig. 5: CV-RDCNet predicted mean and sigma vs. ground truth for the longboardrun2 sequence.
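The per-sequence metrics reported in Table II can be computed as below. A minimal sketch: the function name and the stacked `(N, 3)` layout of the velocity arrays are assumptions, not code from the paper.

```python
import numpy as np

def velocity_errors(v_pred, v_gt):
    """Per-sequence MSE ((m/s)^2) and MAE (m/s) over 3D ego-velocity
    estimates; both arguments are (N, 3) arrays of per-frame velocities."""
    err = np.asarray(v_pred, dtype=float) - np.asarray(v_gt, dtype=float)
    mse = float(np.mean(err ** 2))       # mean squared error over all axes
    mae = float(np.mean(np.abs(err)))    # mean absolute error over all axes
    return mse, mae
```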
We present a method for estimating ego-velocity in autonomous navigation by integrating high-resolution imaging radar with an inertial measurement unit. The proposed approach addresses the limitations of traditional radar-based ego-motion estimation techniques by employing a neural network to process complex-valued raw radar data and estimate instantaneous linear ego-velocity along with its associated uncertainty. This uncertainty-aware velocity estimate is then integrated with inertial measurement unit data using an Extended Kalman Filter. The filter leverages the network-predicted uncertainty to refine the inertial sensor's noise and bias parameters, improving the overall robustness and accuracy of the ego-motion estimation. We evaluated the proposed method on the publicly available ColoRadar dataset. Our approach achieves significantly lower error compared to the closest publicly available method and also outperforms both instantaneous and scan matching-based techniques.
# 1 Introduction

Anonymization is widely regarded as a crucial tool for protecting privacy in an era of big data processing. Theoretically, it serves as a means to mitigate risks associated with the misuse of personal data by ensuring that individuals can no longer be identified. In practice, however, anonymization remains an imprecise science, often misunderstood and misapplied. Many datasets that are presented as anonymized continue to pose significant re-identification risks due to improper techniques or evolving technological capabilities. This gap between the intended function of anonymization and its real-world implementation has led to growing concerns about anonymity-washing, a phenomenon in which organizations claim to have achieved strong privacy protections through anonymization while failing to provide meaningful safeguards. Note that anonymity-washing is a specialized form of privacy-washing [19]. The General Data Protection Regulation (GDPR) establishes anonymization as a mechanism through which personal data can be rendered outside the scope of data protection laws. Recital 26 of the GDPR defines anonymization as the process by which data is “rendered anonymous in such a manner that the data subject is not or no longer identifiable.” However, the absence of clear, practical guidance on how to achieve this standard has resulted in inconsistent implementations and legal uncertainties. Many organizations either overestimate the effectiveness of their anonymization processes or struggle to comply due to conflicting regulatory interpretations. Additionally, courts have recognized that anonymization is never absolute: what is considered anonymous today may become identifiable tomorrow as technology advances. Despite the importance of anonymization, the regulatory and educational landscape remains fragmented and inadequate.
On one end of the spectrum, legal guidelines provide high-level definitions and compliance requirements but lack technical specificity. On the other end, academic research offers rigorous, mathematically grounded approaches to anonymization that are often inaccessible to practitioners who do not have advanced expertise in statistics or computer science. This disconnect has left engineers, data scientists, and policymakers without the necessary tools to implement anonymization effectively. The result is widespread reliance on outdated or insufficient methods, such as k-anonymity and l-diversity, that have been repeatedly shown to fail against modern re-identification attacks [46]. Furthermore, anonymity-washing is exacerbated by inconsistent regulatory interpretations across jurisdictions. The European Union has exercised significant global influence on data privacy regulation, with many countries modelling their laws after the GDPR. However, even within the EU, national data protection authorities and courts have issued conflicting opinions on what constitutes effective anonymization, leading to uncertainty among organizations attempting to comply. Beyond Europe, frameworks such as the United States’ de-identification standards under the Health Insurance Portability and Accountability Act (HIPAA) and the California Consumer Privacy Act (CCPA), Japan’s Act on the Protection of Personal Information (APPI), and emerging guidelines such as the Brazilian General Data Protection Law (LGPD) further demonstrate that approaches to anonymization lack uniformity at the international level, making cross-border data governance exceedingly complex. Another critical factor enabling anonymity-washing is the lack of accessible educational resources for practitioners.
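To make concrete what the k-anonymity criterion mentioned above actually measures, and why it fails so easily, here is a minimal sketch of computing the k of a dataset over its quasi-identifier columns. The column names and records are hypothetical, chosen only for illustration.

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Return k: the size of the smallest equivalence class of records
    sharing the same values on the chosen quasi-identifier columns."""
    classes = Counter(
        tuple(row[q] for q in quasi_identifiers) for row in rows
    )
    return min(classes.values())

# Hypothetical "anonymized" health records: names removed, but the
# (zip, age) pair (75002, 51) is unique, so k == 1 -- a single outside
# record linking zip code and age re-identifies that person.
rows = [
    {"zip": "75001", "age": 34, "diagnosis": "flu"},
    {"zip": "75001", "age": 34, "diagnosis": "asthma"},
    {"zip": "75002", "age": 51, "diagnosis": "flu"},
]
```

Even a large k only protects against this one linkage attack; it says nothing about inference from the sensitive column itself, which is exactly the weakness l-diversity tried (and also failed) to patch.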
Engineers and software developers responsible for implementing anonymization frequently lack adequate training and rely on either high-level legal guidelines or complex, research-oriented papers that do not offer practical guidance. Several regulatory bodies and experts have called for clearer standards, including the European Data Protection Board (EDPB), national data protection authorities (such as the National Commission for Information Technology and Civil Liberties in France (CNIL) and the Federal Commissioner for Data Protection and Freedom of Information (BfDI) in Germany), and research institutions. Yet, despite these calls for action, practitioners continue to report difficulties in accessing concrete, actionable information on how to apply anonymization techniques effectively. In light of these challenges, this paper argues that ambiguities in regulatory guidance, outdated technical approaches, and gaps in practitioner education may lead to anonymity-washing. While prior works have addressed specific aspects of the problem, such as legal critiques of anonymization under data protection law [83,90,75,16], technical limitations of anonymization techniques [1,24,65,46,40,26], or key misunderstandings [37], these contributions offer only a partial view of the broader landscape. In contrast, our work provides a comprehensive analysis of the multiple, interrelated issues underlying anonymity-washing. We expand on the existing literature by integrating a wide range of sources, including legal cases, regulatory interpretations, and technical guidelines, while offering a systematic critique of technical documentation. Furthermore, we provide an international perspective that, to our knowledge, has not been previously compiled in a single work. First, in Section 2 we introduce the concept of anonymity-washing and situate it within the broader landscape of privacy discourse.
In Section 3, we examine the legal foundations and the regulatory ambiguity surrounding anonymization. Next, Section 4 presents an overview and critique of technical guidelines and educational resources, highlighting the gaps practitioners face. Section 5.1 explores the practical implications of anonymity-washing, including legal cases and implementation failures. Finally, Section 6 offers recommendations and concluding reflections on how to address the risks of anonymity-washing through clearer guidance and improved institutional coordination.

# 2 Contextual Elements

# 2.1 Anonymization terminology

The anonymization landscape is complex, with multiple laws advocating for different requirements. But many points of contention stem from the terminology surrounding the topic of anonymization. To begin, it is instructive to look at the terminology developed by the International Organization for Standardization (ISO), as it constitutes the main standard-setting body with international influence. Several ISO standards touch on the topic of data anonymization, and these global standards have recognized the importance of anonymization in various contexts. ISO/IEC 29100:2024(en), which establishes a common privacy terminology, defines anonymization as “[a] process by which personally identifiable information (. . . ) is irreversibly altered in such a way that a [data subject] (. . . ) can no longer be identified directly or indirectly, either by the PII controller alone or in collaboration with any other party.” The same document defines pseudonymization as “[a] process applied to personally identifiable information (PII) (3.7) which replaces identifying information with an alias.” The other term, “de-identification”, is usually considered more neutral and broader than anonymization, although sometimes conflated with the latter [18,47].
Indeed, according to ISO, “de-identification” refers to “[a] process of removing the association between a set of identifying attributes (3.14) and the data subject (3.4).” It follows that anonymization implies the highest degree of privacy, while the more specific process of pseudonymization is a step below anonymization in terms of re-identifiability. In contrast, “de-identification” is the general term describing the process through which data is made confidential. While some jurisdictions mostly follow the ISO terminology, others, unfortunately, do not [2,94]. An example is the fact that the term “de-identification” is not even used within the EU’s GDPR, while several important US instruments, such as HIPAA [93] and the CCPA [17], use it in place of anonymization. In the same vein, Nigeria and Malawi’s Data Protection Acts do not use the term “anonymization”, despite referring to both “de-identification” and “pseudonymization” in their statutes [2]. In contrast, Japan’s Act on the Protection of Personal Information, much like the EU, does not refer to de-identification. Finally, a cursory look at the relevant literature in social science reveals that authors themselves appear to have subscribed to different terminologies [46,18]. Beyond word choice, there seems to be no equivalence between the terms when they are used to refer to data records that have undergone the appropriate treatment to exempt data controllers and processors from their obligations under data protection laws. That is to say, the tolerance level towards identifiability tends to vary across jurisdictions [2]. Discrepancies sometimes exist within a single legal system, as in the US, where re-identifiability tolerance may vary depending on the nature of the data contained in a record and the projected use of the record [47]. In the EU, the situation is no less confusing, as “anonymization” suffers from conflicting interpretations [40] (see details in Section 3).
# Take-away

These variations and inconsistencies make it difficult for practitioners to determine the required level of protection, hindering the understanding and adequate application of anonymization techniques.

# 2.2 “Anonymity-washing” as the misrepresentation of actual confidentiality levels

Due to interpretative instability, the terms that compose the anonymization terminology should not necessarily be taken at face value. Not only are practitioners affected by the confusion in the terminology, but individuals are affected as well, as they may put more trust in information processes than they should. In order to better understand this effect, we must look at interpretations of privacy-washing. In the course of an analysis of questionable data practices of tech industry giants, Girucci gives the following definition [19]: “The purposeful conflation of security with privacy, the disregarding of more granular definitions of privacy (social vs. institutional privacy as well as data types including explicit, implicit, aggregated, and inferred), and a general reliance on offline privacy expectations that are no longer applicable to online spaces.” Despite its provocative tone, the term privacy-washing is more than a mere rhetorical device. Indeed, privacy-washing can accurately describe situations where data privacy guarantees deviate from the standards to which the concerned entities purportedly committed. Evidently, the concept of privacy-washing is broad: it can cover a variety of subjects like cybersecurity and third-party data sharing. This paper focuses on privacy-washing in the anonymization context because deceptive privacy representations are particularly likely there.
In fact, while the anonymization vocabulary taken at face value is unambiguous, it does little to convey the actual fragility [75] of current anonymization methods: “The way companies and the media talk about de-identified data matters, and data holders regularly play fast and loose with the concept of anonymity. The terms “anonymous” and “anonymization” simply over-promise. They create expectations of near-perfection and lull people into a false sense of security” [83]. Data controllers could be tempted to exploit the complexity within current data privacy terminology to mislead data subjects regarding the safety and confidentiality of their data, resulting in anonymity-washing. In essence, anonymity-washing refers to situations involving the misrepresentation of the anonymity level of a data record. The Federal Trade Commission (FTC), as the main agency dealing with consumer protection in the US, has recently dealt with anonymity-washing cases. In a recent communication, the FTC warned that unwarranted claims of anonymity could constitute deceptive consumer practices, reiterating that pseudonymous identifiers in the form of hashing do not constitute anonymization, as some businesses have claimed: “Companies should not act or claim as if hashing personal information renders it anonymized. FTC staff will remain vigilant to ensure companies are following the law and take action when the privacy claims they make are deceptive” [25]. Note how this highlights the manipulative aspect of anonymity-washing. In the EU, potential anonymity-washing cases have been scrutinized by data protection authorities, and some practices have been challenged in court. For instance, the Italian Data Protection Authority (Garante) recently sanctioned the Italian National Institute of Statistics for its failure to deploy the necessary measures to avoid re-identification of the data it used for statistical analysis.
In its order, the Italian authority explained [48]: “Simply having organizational measures or ethical codes is not enough to satisfy data protection principles.” In this case, data controllers claimed to have upheld data protection principles while the data subjects remained, in fact, easily re-identifiable from their data records. The question of intentionality behind deceitful anonymity statements deserves a brief focus, as the term “washing” implies an intentional action. However, intention in this context can be difficult to prove. Sometimes, anonymity-washing cases are so blatant that the willingness to deceive leaves no doubt. Other times, anonymity-washing is harder to prove and therefore appears incidental, giving the impression that data controllers and/or processors are acting in good faith while deploying weaker solutions. There is, of course, a risk of mischaracterization. Still, it may never be possible to prove with a high degree of confidence that a data controller and/or processor acted in good faith, since defendants are likely to claim to be acting in good faith when notified and in the course of legal proceedings.

# Take-away

Anonymity-washing, a subset of privacy-washing, refers to the misrepresentation of the anonymity level of data. The phenomenon is exacerbated by several factors, including unclear terminology.

# 3 Overview of regulatory guidance on data anonymization

On 25 July 2024, the European Commission published its second report on the implementation of the GDPR [44]. One of the key issues highlighted in the report is the persistence of differing interpretations among national data protection authorities, which undermines the uniform application of the GDPR. This discrepancy gives rise to legal uncertainty, and businesses are thus confronted with divergent administrative requirements across different Member States.
In this regard, the Commission seeks to reiterate its request, previously made in 2020 [35], to support practitioners by providing clearer guidance and materials to facilitate GDPR compliance. This issue is particularly pertinent in the context of anonymity-washing, as the Commission has reported in [36]: “Some stakeholders also consider that certain data protection authorities and the Board adopt interpretations that deviate from the risk-based approach of the GDPR, [and] (. . . ) mention as areas of concern: (i) the interpretation of anonymization; (. . . )”. [As a result, the report] “underline[s] the need for additional guidelines, in particular on anonymization and pseudonymization (. . . )”.

# 3.1 EU regulations

The abrogation of Directive 95/46/EC [33] (Data Protection Directive or DPD) and the adoption of the GDPR did not affect anonymization. This is confirmed by the endorsement of the Working Party 29’s (WP29) Opinion 5/2014 on anonymization by the European Data Protection Board (EDPB), which is still in the process of preparing an updated version [88]. In its Opinion 5/2014 on anonymization [78], WP29 recalls the ISO definition of anonymization and notes that the simple removal of identifiers from personal data does not make the anonymization process irreversible. Account should be taken of all “reasonable means” (including computational power and technological evolution) to re-identify anonymous data. These points are addressed by Recital 26 of the GDPR, stating that: “Account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person, to identify the natural person directly or indirectly.
To ascertain whether means are reasonably likely to be used to identify the natural person, account should be taken of all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments.” Before deciding on an anonymization method, an anonymization test must be performed to evaluate the risks (singling out, linkability, and inference). WP29 provides an assessment of the guarantees and shortcomings of each technique from the two main families of anonymization (generalization and randomization) based on these risks. However, several research papers have shown that the analyses provided by WP29 have weak points, and they do not consider these techniques valid [24,84,69,29,87,70]. More recent guidelines have been adopted by the EDPB on issues related to data anonymization, but they do not contain additional advice. On 17 December 2024, the EDPB published guidelines on the anonymization of AI models [38]. It states that whenever models are trained on personal data, they cannot automatically be considered anonymous. The reason is that many studies on these models have demonstrated their capacity to “regurgitate” part of their training datasets [6]. The EDPB stated that a model is considered anonymous only when, based on appropriate documentation, personal data cannot be inferred either directly (by statistical inference, including the probabilistic functioning of the model) or indirectly (within a user’s prompt). If the risk of “regurgitation” of personal data persists, a deeper analysis is needed. Sénéchal criticizes the lack of a threshold for the risk of “regurgitating” personal data and the lack of a distinction between the different AI models in these guidelines [92] (for example, general-purpose AI models [76] and those posing systemic risks).
This is problematic since anonymization is difficult to implement, especially with unstructured data [95], which are essentially used to train general-purpose AI models [96]. Additionally, the question of whether the data can be separated from the model remains unanswered. It is also not known whether the anonymization of the model implies that of the data it contains. The EDPB also adopted guidelines on pseudonymization on 16 January 2025 [39], in which it recalls the GDPR definition set out in Article 4(5). The Board stressed that, although pseudonymization secures data, the GDPR applies whenever the reattribution of data to a natural person (by linking pseudonyms to additional data) remains possible. It recalls that even if the original data are deleted, pseudonymized data become anonymous only if all requirements are met. It is interesting to note that the guidelines do not provide further information on anonymization requirements. This is regrettable for two reasons: first, updated guidelines on anonymization have yet to be issued; and second, as the Spanish Data Protection Authority has pointed out, confusion between anonymization and pseudonymization remains a common misunderstanding among data controllers [37].

# Take-away

Guidelines on anonymization need to be updated (as they have not been since 2014). The information provided by the EDPB on pseudonymization and on the anonymization of AI models does not resolve the contradictions of its previous guidelines, nor the practical difficulties controllers are confronted with when implementing anonymization protocols in real life.

# 3.2 Anonymization regimes beyond the EU

The uncertainties resulting from the changing interpretation within the EU undermine the so-called “Brussels effect”, whereby non-EU states take inspiration from the EU’s laws when building their own legal regimes. Data flows often involve entities located in different jurisdictions, including non-EU countries [68,67].
Moreover, data protection laws usually have some extraterritorial effects, which means that multiple regimes are sometimes applicable simultaneously. Hence, it is vital to ensure that legal regimes on anonymization do not contradict each other. Yet, a survey conducted by the OECD in 2019 found that “uncertainty regarding legal privacy regimes” and “incompatibility of legal regimes” topped the list of the main challenges to cross-border data flows [74]. Anonymization guidelines are present in data protection regimes across the globe. There are differences, however, in the approaches and the overall granularity levels exhibited by the relevant frameworks. Notably, some data protection regimes, such as in Japan [58] and the US [93], come with relatively detailed guidance on how to achieve the expected levels of anonymization and how to handle the data [85,5]. With regard to data transfers between the EU and the USA, the previous Privacy Shield, adopted on the basis of the European Commission’s decision that the USA’s level of personal data protection was equivalent to that of the EU [42], was replaced by a revised Privacy Framework after the EU Court of Justice overturned it [60]. In contrast, other regulatory frameworks, such as in Brazil [14], are not particularly prescriptive and require additional input. At the same time, several jurisdictions have initiated efforts to modernize their approaches to data protection, including anonymization. For example, Brazil’s national data protection authority, the ANPD, is set to clarify in the upcoming years what measures could be implemented to ensure anonymity in accordance with its 2018 Lei Geral de Proteção de Dados (LGPD) [4]. A call for public participation to that effect was published in early 2024. In 2023, the Data Security Council of India (DSCI) published a roadmap considering possible orientations for a national data anonymization regime [53].
At the intra-state level, in Québec, the Regulation respecting the anonymization of personal information was published in 2024 [82]. The text is very prescriptive and seeks to clarify the distinction between anonymity and pseudonymity, aligning with the EU’s view. In the EU, the EDPB is expected to publish new guidelines on anonymization later this year. These guidelines are expected to fix the inconsistencies introduced by the WP29 Opinion 5/2014 on anonymization, thereby clarifying the dominant approach at the EU level [40]. Anonymization is still a maturing field. Valuable guidelines on anonymization are often released after the publication of the main body of law. Hence, recent data protection laws such as China’s Personal Information Protection Law or India’s Digital Personal Data Protection Act will need to be complemented with guidelines on anonymization [89,68]. Furthermore, while precise anonymization parameters are still not consistent across jurisdictions, the basic premises of anonymization law remain the same: anonymization levels may vary, and so do the obligations placed upon data handlers [68]. Whatever the approach, it seems that regulators are left with two choices: either leaving enough leeway for data handlers to determine for themselves which methods and policies would meet regulators’ expectations, or prescribing exactly which technical and organizational measures are required. Both approaches have their merits and shortcomings. On the one hand, there is an inherent limitation on the degree of granularity that can be achieved in the law. Excessively precise regulations and guidelines may pose problems at the implementation stage and may prove to be overly restrictive, as has already been seen with the WP29 Opinion 5/2014. The limited technical knowledge of regulators may constrain the formulation of highly detailed guidelines anyway.
On the other hand, too much leeway could seriously undermine the purpose of data protection laws by increasing the likelihood that poorly anonymized data records will fall outside their scope.

# Take-away

Anonymization laws seem to have been evolving independently, with differing requirements and definitions. Whether all the ambiguities will be fixed and whether every actor will converge around the same interpretation remains to be seen.

# 4 Overview and Critique of Guidelines

# 4.1 Contradictory guidelines and uncertain standards within the EU

Some authors consider that EU Data Protection Law lacks a clear definition of anonymization [16]. Unlike pseudonymization, the GDPR fails to define anonymization in its Article 4, titled “Definitions”. However, another Opinion of WP29 on the concept of personal data, issued in 2007 [79], clarified the difference between anonymization and pseudonymization. Recalling ISO’s previous definitions, they explained that anonymization protects privacy, while pseudonymization represents a technical, reversible process. Nevertheless, data can still be considered anonymous even when re-identification remains possible, provided that complementary measures to prevent re-identification are implemented. This flexible approach was not supported in the Opinion 5/2014 on anonymization [78], which applies together with the previous Opinion on the concept of personal data. In Opinion 5/2014, WP29 required the aggregation of data (into group statistics) and the destruction of raw data (identifiers) to ensure correct anonymization. Nevertheless, the objective still remained to prove that the likelihood of re-identification was negligible. On this point, the Commission’s guidelines [34] on the free flow of non-personal data suggest that it is often difficult to assess the effectiveness of an anonymization procedure.
Indeed, besides many academic papers [3,46,29,70,1,28], even a study commissioned by the European Parliament’s ITRE Committee has shown that it is possible to re-identify supposedly anonymized data [45]. Moreover, national DPAs disagree on how to implement anonymization. The French DPA, the CNIL, adopts the WP29’s approach to anonymization; other national DPAs are more flexible. For example, the UK’s ICO (Information Commissioner’s Office) states that [50]: “The DPA does not require anonymization to be completely risk free—you must be able to mitigate the risk of identification until it is remote. If the risk of identification is reasonably likely, the information should be regarded as personal data— (. . . ). Clearly, 100% anonymization is the most desirable position, and in some cases, this is possible, but it is not the test the DPA requires.”, and Ireland’s DPC (Data Protection Commission) writes [32]: “Organisations don’t have to be able to prove that it is impossible for any data subject to be identified in order for an anonymization technique to be considered successful. Rather, if it can be shown that it is unlikely that a data subject will be identified given the circumstances of the individual case and the state of technology, the data can be considered anonymous”. The CNIL’s guidelines on anonymization are of particular interest, as they establish a strict standard to determine whether data can be considered anonymous. They assert that data is anonymous only when it is impossible to re-identify the data subject. However, they recognise that, even when the criteria regarding singling out, linkability, and inference are not fully met, data can be deemed anonymous if a subsequent analysis indicates a negligible risk of re-identification [21]. Unfortunately, the definition of what constitutes a negligible risk remains ambiguous. For these reasons, many controllers have given up on anonymization and prefer pseudonymization [16], even though pseudonymized data remain subject to the GDPR.
The problem in this case is that "pseudonymized data are considered personal data, regardless of whether they are, or ever will be, in the hands of a person who holds the key needed for re-identification" [81]. The application of personal data regulation can be challenging, especially in the context of rapid data access, as evidenced by the case of health data (see also Section 5.1).

# Take-away

The lack of a clear, harmonized definition of anonymization across EU legal texts and among national authorities creates confusion for practitioners. This leads many to favour pseudonymization, even though it offers weaker privacy protection, with the data remaining fully subject to the GDPR. This regulatory ambiguity undermines the consistent implementation of anonymization and weakens trust in it as a reliable privacy safeguard.

# 4.2 Technical Documents

While non-technical guidance often lacks the precision needed for implementation, technical documentation is not always more helpful. In several cases, companies have claimed they could not find clear guidance on how to anonymize data, a claim sometimes countered by DPAs pointing to existing documents [23]. However, our review shows that most guidelines are either hard to find (unavailable or not referenced by DPAs) or not practically useful (entry-level). We reviewed the websites of the five most active EU DPAs (France, Austria, Ireland, Germany, and Italy). Most do not provide detailed technical materials:

– CNIL (France) offers introductory guides [20,21] that largely repeat WP29 content, and a clear (though potentially misleading) explanation of pseudonymization [22].
– Garante (Italy) provides an overview [49] of how to implement the GDPR, mainly reiterating previous legal guidance.
– BfDI (Germany) publishes policy papers and speeches [7,8,11,12], but with limited technical depth. [15] focuses on the importance of anonymization, while [9] and [10] discuss risks related to other personal-data issues.
– DSB (Austria) offers legal advice only.
– DPC (Ireland) stands out with well-structured and clear guidance [31,32] on legal questions; however, it does not offer practical advice.

Outside the EU, the UK's ICO provides excellent guidance, including on state-of-the-art methods like differential privacy and other PETs [51,52]. However, some examples are oversimplified or technically wrong¹³. The UK Anonymisation Network (UKAN) also offers practical tools, such as a decision-making framework that uniquely addresses attacker modelling [41]. However, most chapters remain general (entry-level), in contrast to the referenced DIS method, which requires Master's-level statistical knowledge. The anonymization guide of Singapore's PDPC [80] is accessible and educative, guiding the reader from data discovery to risk measures with informative examples; however, it is also entry-level and adds little beyond other existing material. Some statistical agencies provide additional resources. The National Institute of Statistics and Economic Studies in France (INSEE) offers slides and working papers [54,56], but most lack practical detail. A notable exception is Bergeat's work [57], which compares and explains experiments with two anonymization software tools: μ-Argus and SDCMicro. It also gives plenty of citations, although only to sources on statistics (no computer science references). It is aimed at statisticians and could serve as a continuation of other introductory materials, but only for readers who already have at least a Bachelor's degree in mathematics or statistics. Other guides, such as [55], focus on confidentiality rules within the French statistical service rather than on techniques. Statistical documents mention only statistical tools and use a different language from that of computer scientists. This could, unfortunately, result in ignoring some state-of-the-art methods, such as differential privacy.
A good example is the working paper [30], which details the anonymization process applied to a large French administrative database, where the authors experimented with different methods including k-anonymity, all-m anonymity, and l-diversity. They mention that they tried to apply differential privacy but abandoned the experiment due to a lack of expertise. Academic papers are another option, but they often assume advanced statistical or mathematical knowledge (Master's or PhD level), making them inaccessible to many practitioners. Moreover, choosing appropriate methods from the literature is difficult without deep expertise, which may explain the frequent use of outdated or misapplied techniques in practice [56,57,66,30]. Some expert-written materials aimed at non-technical readers exist [97,72,71]; however, they are rarely cited in public or institutional guidance.

# Books

Books on anonymization tend to target either high-level management (e.g. [27,86,73]) or technical researchers. Some, like [13], cover a broad range of privacy topics but lack methods for evaluating anonymization quality. Jarmul's work [59] offers a more hands-on perspective, including differential privacy and privacy engineering workflows, making it useful for practitioners. Stallings' book [91] is a strong general-purpose resource, well suited for short training programs.

# Take-away

Most technical anonymization resources are either too simplistic or too involved, offering little practical use for professionals. Practical regulatory guidance is rare and often legalistic. This leaves practitioners with a fragmented landscape, outdated methods, and an incentive to abandon anonymization altogether. Bridging these gaps requires targeted, accessible, and technically sound educational materials.

# 5 Inadequate Practices

# 5.1 Confusions arising from the definition of personal data

EU case law lacks clarity with regard to anonymization practices.
The most relevant cases focus on clarifying the concept of personal data. However, the interpretations provided help to assess what an anonymized dataset is not. Moreover, the Court's application of Recital 26 to real cases provides valuable insight into whether data remains anonymous despite a residual risk of re-identification. In this regard, the General Court's SRB vs EDPS decision is a good example [43]. SRB (Single Resolution Board) carried out an insolvency procedure against Banco Popular. Within this procedure, some data were processed to assess the eligibility of the participants for compensation. Each participant was identifiable by means of a randomly generated alphanumeric code. The staff processing these data only had access to the codes and not to the key identifying the participants. The EDPS (European Data Protection Supervisor) considered these data pseudonymized [77], but its decision was challenged before the General Court. Using a risk-based approach, the General Court decided that the data were anonymous. Indeed, in line with the Breyer case law of the Court of Justice [61], the additional information (the key) needed to re-identify the data subjects remained inaccessible to the processing staff. The fact that the staff could not legally access the complementary data that would allow re-identification proved enough to consider that no reasonable means existed to re-identify the data, which thus remained non-personal [43].

# 5.2 EUCJ's case-law on personal data

This example shows that the EUCJ's case law on personal data builds upon its precedents rather than undergoing a radical evolution. On this point, the Breyer decision [61] concerned the dynamic nature of IP addresses, which change with each connection. The plaintiff initiated legal proceedings against the Federal Republic of Germany for its practices concerning the storage and registration of these data.
The Court of Justice had to determine whether dynamic IP addresses should be considered personal data for the service provider. The Court decided that the retention of all information by a single entity was not a prerequisite for data to be considered personal. This meant that a third party could retain such re-identifying information without affecting the qualification of the data. However, the Court acknowledged that an assessment was necessary to determine the reasonableness of combining this information, taking into account the effort, time, and cost associated with the operation, as well as the accessibility to the service provider of this additional information (enabling user identification). Given the legal restrictions on such access in Germany, the Court determined that, in the absence of legal means to obtain this information, the data in question were not deemed personal. The doctrine posits that two fundamental elements have been applied since the Breyer decision to ascertain the personal nature of data: (1) the distinguishability of the data, defined as the capacity of the data points to identify an individual, and (2) the availability of additional data to "situationally relevant entities" capable of associating these data with a physical person [89]. The Scania decision [64] perfectly illustrates this methodology. In this case, the Court was asked to determine the legal status of a vehicle identification number (VIN), a unique alphanumeric code assigned by manufacturers to each vehicle, through which its proprietor can be identified. In its decision, the Court stated that the VIN can be personal data for independent operators if they have the additional data that enable re-identification, and for vehicle manufacturers if they make the VIN available. The availability of data is considered in conjunction with the capacity to isolate the proprietors of the vehicles or any other people who hold a title to them.
Some authors have suggested an evolution in the interpretation of personal data, attributing this change to the Court's categorization of independent operators and manufacturers as "situationally relevant entities" capable of associating the VIN with additional identifying information.

# Take-away

The consistency of the Court's jurisprudence on the concept of personal data is paramount to countering the phenomenon of anonymity-washing. This is particularly crucial given the occurrence of poor anonymization practices, which the EUCJ is entitled to sanction.

# 5.3 Confusion at the institutional level

A relevant case that sanctioned the European Commission [62] demonstrates the importance of taking into account publicly available data when assessing the risk of re-identification. The applicant had received European funding as a researcher. The funds had been misappropriated, and the costs were ordered to be reimbursed. The Commission published a press release summarizing the decision, without mentioning the applicant's direct identifiers in order to protect their privacy. However, the researcher brought an action for the annulment of the press release, since it contained identifiable data. The General Court dismissed it, and the applicant appealed the decision to the EUCJ. The Court considered that "information relating to the gender of a person who is the subject of a press release, that person's nationality, his or her father's occupation, the amount of the grant for a scientific project and the geographical location of the entity hosting that scientific project, taken together, contain information that may allow the person who is the subject of that press release to be identified, in particular by those working in the same scientific field and familiar with that person's professional background", and went on to state that this circumstance "does not allow the risk of identification of the data subject to be regarded as insignificant".
The judgment avoids discussing anonymization practices in detail but highlights that this issue is subject to misinterpretation, including among public institutions like the Commission. Indeed, even EU institutions appear to treat the deletion of direct identifiers as sufficient to achieve anonymization.

# 5.4 Confusion within firms

Confusion about anonymization practices is also widespread among companies. A good example is the IAB Europe case [63]. The company established a set of guidelines aimed at ensuring compliance with the GDPR concerning the collection of browsing data via a TC String (a series of characters encoding the user's preferences). This string could later be used by companies for commercial purposes. The Court ruled that the TC String was a form of personal data, given its capacity to allow individuals to be identified by associating it with additional information (such as an IP address). Although third parties retained this additional information, IAB Europe was able to obtain it. In this regard, the relationship between the EUCJ case law, which employs a risk-based approach to assess the reasonable means likely to be used to re-identify the data, and the WP29 Opinion 5/2014 on anonymization appears complex [90]. On the one hand, discordance persists in the discourse of WP29 between the zero-risk approach (re-identification must be negligible if not impossible) and the necessity for reasonableness, given that all anonymization techniques are considered imperfect [40]. On the other hand, the requirement to destroy raw data and to aggregate them in order to achieve anonymization, imposed by WP29, is not echoed in the case law of the EUCJ, which considers data to be anonymous even if the original data are not deleted, the only relevant aspect being the impossibility (legal rather than technical) of accessing the additional data that enable re-identification.
These contradictions contribute to privacy-washing practices by making it difficult to distinguish a company's bad faith from its lack of knowledge about anonymization methods, especially when data are processed for commercial purposes. In the case of IAB Europe, for example, the company assumed that, due to the unavailability of additional information, the TC String did not constitute personal data. This mistake is frequently observed among firms, as numerous decisions adopted by the national DPAs attest.

# 5.5 Confusions arising from the difference between pseudo- and anonymization

One of the most relevant decisions on anonymization dates from 5 September 2024, namely the CEGEDIM SANTE case [23]. CEGEDIM designs and sells secretarial software for the medical sector. The company collected patient health data from doctors who agreed to participate in creating a health data repository. These data were allegedly anonymized with k-anonymity techniques. The decision was based on two key factors: (1) the WP29 test¹⁴, which determines whether the data were anonymized, and (2) EUCJ case law. The rapporteur, however, was able to isolate a 6-year-old patient with a medical condition, which would suggest pseudonymization, unless re-identification is proved to be "negligible" by "reasonable means". On this point, the CNIL concluded that the available data could easily be re-identified. The company contended, however, that there were few educational materials on anonymization and that the guidelines lacked the precision needed for legal certainty. Although the CNIL rejected this argument, the WP29 Guidelines on anonymization have indeed not been updated with respect to the current risks associated with k-anonymity. Furthermore, the CNIL has not specified what constitutes a 'negligible risk'. This observation suggests that anonymity-washing may not always be a deliberate practice.
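The singling-out problem at the heart of this case can be made concrete in a few lines of code. The sketch below is purely illustrative, with hypothetical records and hypothetical quasi-identifiers (it is not the CEGEDIM data or the CNIL's test); it shows how a dataset stripped of direct identifiers can still be only 1-anonymous, i.e. contain individuals who are unique on attributes an attacker may plausibly know:

```python
from collections import Counter

# Hypothetical records: (age_band, postcode_prefix, diagnosis_code).
records = [
    ("30-39", "750", "J45"),
    ("30-39", "750", "J45"),
    ("30-39", "750", "E11"),
    ("30-39", "750", "E11"),
    ("0-9",   "130", "C91"),  # a single child with a rare condition
]

def quasi_identifiers(record):
    """Attributes an attacker is assumed to know (age band, postcode)."""
    return (record[0], record[1])

def k_of(dataset, qi):
    """Smallest equivalence-class size over the quasi-identifiers:
    the dataset is k-anonymous for this k and no larger k."""
    counts = Counter(qi(r) for r in dataset)
    return min(counts.values())

def singletons(dataset, qi):
    """Quasi-identifier combinations that isolate exactly one person."""
    counts = Counter(qi(r) for r in dataset)
    return [combo for combo, n in counts.items() if n == 1]

print(k_of(records, quasi_identifiers))        # 1 -> only 1-anonymous
print(singletons(records, quasi_identifiers))  # [('0-9', '130')]
```

The adult records form a comfortable group of four, yet the single child is immediately singled out, which is exactly the kind of residual risk the "negligible by reasonable means" test is supposed to catch.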
# Take-away

Despite the consistency of the EUCJ case law relating to personal data, confusion is widespread at the institutional level and among firms as far as the distinction between personal and non-personal data is concerned. Such confusion originates from the difficulty of distinguishing anonymization from pseudonymization techniques, and could enable unintentional anonymity-washing.

# 6 Discussion and overture

The gap between regulatory guidance and technical solutions gives space to anonymity-washing, whether intentional or unintentional. While frameworks exist to support anonymization efforts, their inconsistent application, misinterpretation, and continued reliance on outdated methods, repeatedly shown to be ineffective, often create a false sense of compliance and security. Several key issues contribute to this phenomenon. In Section 3 we explain the lack of clear guidance on applying regulations and definitions. Guidelines lack a clear definition of pseudonymization and omit a definition of anonymization altogether. This inconsistency can be a source of confusion and anonymity-washing. Nevertheless, establishing a coherent terminology is not a straightforward task, as the personal nature of data can be strongly situation-dependent, and data can be used in numerous ways. We have also shown that guidelines are often outdated or unreliable. It has been shown that the anonymization techniques in the WP29 Opinion 5/2014 on anonymization are no longer reliable, and relevant questions persist, given the contradictory nature of the document. Consequently, the guidelines on anonymization remain to be updated, and the information provided by the EDPB on the pseudonymization and anonymization of AI models does not address this gap. Next, we have seen that the differing interpretations of the regulations among authorities compromise their uniform application, leading to legal uncertainty for businesses and organisations.
For example, some authorities, like the CNIL, adopt a stringent approach, while others, like the ICO and the DPC, are more flexible. Furthermore, in Section 5.1 we show that there is a lack of awareness among practitioners. They often do not recognize that their data can constitute personal data, leading to a failure to implement privacy-by-design principles [9]. Such misconceptions could be eliminated by adequate training and guidance that give practitioners and engineers the right tools to competently assess their datasets and apply anonymization methods. We believe that one crucial cause of this shortcoming is the lack of understanding of anonymization methods. Many practitioners, particularly those without advanced mathematical skills, struggle to understand and apply fundamental privacy principles. The complexity of privacy-enhancing technologies (PETs) creates an additional barrier, making it difficult for non-specialists to implement effective anonymization (e.g. [10,30,63]). Another consequence of this educational deficiency is that organizations continue to rely on outdated anonymization methods, such as k-anonymity and l-diversity, despite their well-documented vulnerabilities [46]. This reliance stems from a lack of awareness of modern privacy-preserving techniques, as well as limited resources for evaluating and adopting alternative approaches. However, this lag between the state of the art and the most popular, but outdated, tools is neither new nor unique to this field. There has always been a collaboration gap between academia and industry that limits the transfer of theoretical advancements into practice. Without structured mechanisms to facilitate knowledge-sharing, industry professionals may not only struggle to integrate the latest research into their anonymization strategies but may ignore it completely.
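To make the contrast with the outdated methods above concrete, the sketch below shows a minimal form of the more modern approach mentioned throughout this paper, differential privacy: a count is released with calibrated Laplace noise instead of its raw value. It is an illustrative toy with hypothetical data and a single counting query, not a substitute for a vetted library (e.g. OpenDP):

```python
import random

def dp_count(values, predicate, epsilon):
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale 1/epsilon
    suffices. The difference of two Exponential(epsilon) draws is exactly
    a Laplace(0, 1/epsilon) random variable.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical health records: ages of patients sharing a diagnosis.
ages = [6, 34, 35, 37, 38, 41]

# "How many patients are under 10?" The true answer (1) would single out
# the child; the noisy answer hides whether that child is present at all.
noisy_answer = dp_count(ages, lambda a: a < 10, epsilon=1.0)
```

Unlike k-anonymity, the guarantee here does not depend on guessing which attributes an attacker knows: whatever the background knowledge, the distribution of the noisy answer changes little whether or not any single patient is in the data.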
# 6.1 Overture

To address these issues and mitigate the risks of anonymity-washing, we would like to raise awareness among privacy experts and encourage them to facilitate adoption by practitioners. With this objective in mind, we suggest the following actions. We believe that one of the most important actions is to develop a comprehensive anonymization curriculum that could be promoted and distributed by data protection authorities, either as training programs or as thorough, up-to-date educational resources, such as books and hands-on exercises, supported by structured guidance on fundamental privacy principles and their real-world applications. Key components should include:

(1) A clear explanation of privacy threats and their manifestations in various datasets.
(2) An overview of widely accepted privacy definitions (beyond k-anonymity and l-diversity, also differential privacy and cryptographic methods), including their advantages and drawbacks.
(3) How these techniques can defend against said privacy threats.
(4) Techniques for evaluating privacy technologies and applying them to real-world scenarios.
(5) Strategies for auditing privacy risks and implementing mitigation measures in large-scale datasets.
(6) Best practices for integrating privacy considerations into broader software engineering projects.
(7) Case studies illustrating data breaches and the consequences of inadequate privacy protections.
(8) Hands-on learning resources, such as Jupyter notebooks and real-world datasets.

Moreover, educational resources should be tailored to diverse audiences. Given the varying levels of expertise among practitioners, privacy education must be designed to accommodate both technical and non-technical professionals. It is also important to emphasize the use of state-of-the-art methods.
Practitioners should be trained to critically assess privacy techniques and understand the limitations of traditional methods on modern, large datasets. Such critical assessment also requires adequate privacy risk assessments based on attacker capabilities, data sensitivity, and intended data usage. Finally, we believe that a curriculum of this depth cannot be delivered without enhancing the collaboration between academia and industry. We acknowledge that finding a common language between academia and industry is not always straightforward. It takes time and effort: joint initiatives, workshops, and training programs should be encouraged to bridge the gap between theoretical advancements and practical implementation. By realizing these recommendations, organizations and policymakers can move beyond superficial compliance efforts and work toward fostering a robust, meaningful approach to data privacy that will help reduce the prevalence of anonymity-washing and ensure that anonymization practices align with contemporary privacy risks, industrial demands, and capacities.

# 6.2 Future work

To support the collaboration between academia and industry, we wish to create a repository of guidelines and educational materials based on the knowledge accumulated while writing this paper. We envision constructing a dedicated website¹⁵ with instructions that would help practitioners navigate it and find the guideline, document, or educational resource appropriate to their needs. Furthermore, we conjecture that there is one more potential source of many of the aforementioned problems, namely, the use of popular anonymization tools. Thus, we have already started examining these existing tools: firstly, to corroborate our conjecture, and secondly, to be able to properly include them in the aforementioned website, equipping practitioners with the necessary understanding and comparison of these products.

# References

1.
Aggarwal, C.C.: On k-anonymity and the curse of dimensionality. In: VLDB. vol. 5, pp. 901–909 (2005)
2. Aliki, E., Marietjie, B., Dusty-Lee, D., Beverley, T., Carmel, S., Donrich, T.: 'Potato potahto'? Disentangling de-identification, anonymisation, and pseudonymisation for health research in Africa. Journal of Law and the Biosciences 12(1), 1 (2025)
3. Altman, M., Aloni Cohen, K.N.: ACM TechBriefs. ACM (2024)
4. ANPD: Agenda regulatória 2025-2026 (2025), https://www.gov.br/anpd/pt-br/assuntos/noticias/anpd-publica-agenda-regulatoria-2025-2026
5. Atsumi & Sakai: A guide to data protection in Japan (2020), https://www.aplawjapan.com/archives/pdf/data-protection-202009.pdf
6. Ayyamperumal, S.G., Ge, L.: Current state of LLM risks and AI guardrails. arXiv preprint arXiv:2406.12934 (2024)
7. BfDI: Die Anonymisierung im Datenschutzrecht (2022), https://www.bfdi.bund.de/SharedDocs/Downloads/DE/DokumenteBfDI/Reden_Gastbeitr%C3%A4ge/2022/Anonymisierung-im-DS-recht.pdf?__blob=publicationFile&v=2
8. BfDI: Aktuelle Fragestellungen des Datenschutzes (2023), https://www.bfdi.bund.de/SharedDocs/Downloads/DE/DokumenteBfDI/Reden_Gastbeitr%C3%A4ge/2023/eco-Kompetenzgruppe.pdf?__blob=publicationFile&v=2
9. BfDI: Arbeitspapier zu Telemetrie und Diagnosedaten (2023), https://www.bfdi.bund.de/SharedDocs/Downloads/DE/Berlin-Group/20230608_APTelemetrie-Diagnosedaten.pdf?__blob=publicationFile&v=3
10. BfDI: Arbeitspapier zum Thema "Smart Cities" (2023), https://www.bfdi.bund.de/SharedDocs/Downloads/DE/Berlin-Group/20230608_WPSmart-Cities.pdf?__blob=publicationFile&v=2
11. BfDI: Datenschutz durch Technik – Chancen und Grenzen von Anonymisierung, Pseudonymisierung und PETs (2024), https://www.bfdi.bund.de/SharedDocs/Downloads/DE/DokumenteBfDI/Reden_Gastbeitr%C3%A4ge/2024/Datenschutz-durch-Technik-BvD.pdf?__blob=publicationFile&v=1
12. BfDI: Datennutzung vs.
Datenschutz – Veranstaltung zum Europäischen Datenschutztag (2025), https://www.bfdi.bund.de/SharedDocs/Downloads/DE/DokumenteBfDI/Reden_Gastbeitr%C3%A4ge/2025/Rede-Eu-Akademie-Informationsfreiheit-Datenschutz.pdf?__blob=publicationFile&v=2
13. Bhajaria, N.: Data Privacy: A Runbook for Engineers. Simon and Schuster (2022)
14. Brazil: Lei Geral de Proteção de Dados Pessoais (redação dada pela Lei nº 13.853, de 2019) (LGPD) (2019)
15. Burkert, C., Federrath, H., Marx, M., Schwarz, M.: Positionspapier zur Anonymisierung unter der DSGVO unter besonderer Berücksichtigung der TK-Branche. Konsultationsverfahren des BfDI 10 (2020)
16. Burt, A., Stalla-Bourdillon, S., Rossi, A.: A guide to the EU's unclear anonymization standards (2021), https://iapp.org/news/a/a-guide-to-the-eus-unclear-anonymization-standards/
17. CCPA: California Consumer Privacy Act (CCPA) (2020)
18. Chevrier, R., Foufi, V., Gaudet-Blavignac, C., Robert, A., Lovis, C.: Use and understanding of anonymization and de-identification in the biomedical literature: scoping review. Journal of Medical Internet Research 21(5), e13484 (2019)
19. Cirucci, A.M.: Oversharing the super safe stuff: "privacy-washing" in Apple iPhone and Google Pixel commercials. First Monday (2024)
20. CNIL: L'anonymisation des données, un traitement clé pour l'open data (2019), https://www.cnil.fr/fr/lanonymisation-des-donnees-un-traitement-cle-pour-lopen-data
21. CNIL: L'anonymisation de données personnelles (2020), https://www.cnil.fr/fr/technologies/lanonymisation-de-donnees-personnelles
22. CNIL: Recherche scientifique (hors santé) : enjeux et avantages de l'anonymisation et de la pseudonymisation (2022), https://www.cnil.fr/fr/recherche-scientifique-hors-sante-enjeux-et-avantages-de-lanonymisation-et-de-la-pseudonymisation
23. CNIL, Commission nationale de l'informatique et des libertés: Délibération SAN-2024-013 (5 September 2024)
24.
Cohen, A., Nissim, K.: Towards formalizing the GDPR's notion of singling out. Proceedings of the National Academy of Sciences 117(15), 8344–8352 (2020)
25. FTC, Federal Trade Commission: Protecting consumer privacy in an era of rapid change: recommendations for businesses and policymakers, FTC report (2012), https://www.ftc.gov/sites/default/files/documents/reports/federal-trade-commission-report-protecting-consumer-privacy-era-rapid-change-recommendations/120326privacyreport.pdf
26. Cormode, G., Srivastava, D., Li, N., Li, T.: Minimizing minimality and maximizing utility: analyzing method-based attacks on anonymized data. Proceedings of the VLDB Endowment 3(1-2), 1045–1056 (2010)
27. Craig, T., Ludloff, M.E.: Privacy and Big Data: The Players, Regulators, and Stakeholders. O'Reilly Media, Inc. (2011)
28. De Montjoye, Y.A., Hidalgo, C.A., Verleysen, M., Blondel, V.D.: Unique in the crowd: The privacy bounds of human mobility. Scientific Reports 3(1), 1376 (2013)
29. De Montjoye, Y.A., Pentland, A.S.: Response to comment on "Unique in the shopping mall: On the reidentifiability of credit card metadata". Science 351(6279), 1274 (2016)
30. Djiriguian, Missègue, R.: Diffuser une base anonymisée : utopie ou réalité ? Journées de méthodologie statistique de l'Insee (2022), https://journees-methodologie-statistique.insee.net/gestion-du-secret-pour-la-diffusion-grand-public-de-cubes-multidimensionnels-une-experimentation-au-ssm-agriculture/
31. DPC: Data protection: The basics (2019), https://dataprotection.ie/sites/default/files/uploads/2019-07/190710%20Data%20Protection%20Basics.pdf
32. DPC: Guidance on anonymisation and pseudonymisation (2019), https://www.dataprotection.ie/sites/default/files/uploads/2022-04/Anonymisation%20and%20Pseudonymisation%20-%20latest%20April%202022.pdf
33.
EC, European Commission: Directive 95/46/EC of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data, OJ L 281 (1995)
34. EC, European Commission: Communication from the Commission to the European Parliament and the Council, guidance on the regulation on a framework for the free flow of non-personal data in the European Union, COM/2019/250 final (2019)
35. EC, European Commission: Communication from the Commission to the European Parliament and the Council – two years of application of the General Data Protection Regulation, COM/2020/264 (2020)
36. EC, European Commission: Communication from the Commission to the European Parliament and the Council – second report on the application of the General Data Protection Regulation, COM/2024/357 (2024)
37. EDPB: 10 misunderstandings related to anonymisation (2021)
38. EDPB, European Data Protection Board: Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models (2024)
39. EDPB, European Data Protection Board: Guidelines on pseudonymisation – version for public consultation (2025)
40. El Emam, K., Alvarez, C.: A critical appraisal of the Article 29 Working Party Opinion 05/2014 on data anonymization techniques. International Data Privacy Law 5(1), 73–87 (2015)
41. Elliot, M., Mackey, E., O'Hara, K.: The Anonymisation Decision-Making Framework, 2nd edition: European practitioners' guide (2020)
42. EU-USA: Privacy Shield framework (2016), https://eur-lex.europa.eu/eli/dec_impl/2016/1250/oj/eng
43. EUGC, European Union General Court: SRB vs EDPS, case T-557/20 (26 April 2023)
44. European Parliament, Council of the European Union: Regulation (EU) 2016/679 of the European Parliament and of the Council (GDPR) (2016), https://data.europa.eu/eli/reg/2016/679/oj
45. European Parliament, Directorate-General for Internal Policies: Industry, research and energy – data flows: future scenarios, in-depth analysis for the ITRE Committee (2017)
46.
Gadotti, A., Rocher, L., Houssiau, F., Creţu, A.M., De Montjoye, Y.A.: Anonymization: The imperfect science of using data while preserving privacy. Science Advances 10(29), eadn7053 (2024)
47. Garfinkel, S.L.: De-identification of personal information. National Institute of Standards and Technology (2015)
48. GPDP: Provvedimento [10090499] (13 November 2024)
49. GPDP: Guida all'applicazione del Regolamento europeo in materia di protezione dei dati personali (2023), https://www.garanteprivacy.it/documents/10160/0/Guida+all+applicazione+del+Regolamento+UE+2016+679.pdf/2281f960-a7b2-4c53-a3f1-ad7578f8761d?version=2.0
50. ICO: Anonymisation: managing data protection risk code of practice (2019)
51. ICO: (Draft) anonymisation, pseudonymisation and privacy enhancing technologies guidance (2022), https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-call-for-views-anonymisation-pseudonymisation-and-privacy-enhancing-technologies-guidance/
52. ICO: Privacy-enhancing technologies (PETs) (2023), https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/data-sharing/privacy-enhancing-technologies/
53. D.S.C. of India: Balancing privacy and innovation: Anonymisation standards for Indian data (2023)
54. INSEE: Risque de ré-identification : deux questions pratiques relatives au critère de la l-diversité (2019), https://www.insee.fr/fr/information/4277545
55. INSEE: Guide du secret statistique (2024), https://www.insee.fr/fr/information/1300624
56. INSEE: Les méthodes perturbatives d'anonymisation des données individuelles (2024), https://www.insee.fr/fr/statistiques/fichier/4277545/1-SMS_secret_24_juin_2019.pdf
57. Bergeat, M., INSEE: La question de la confidentialité des données individuelles (2016), https://www.insee.fr/fr/statistiques/2535625
58. Japan: Act on the Protection of Personal Information (Act No. 57 of 2003) (last version Act No.
37 of 2021) (2003), https://www.japaneselawtranslation.go.jp/ en/laws/view/4241/en 59. Jarmul, K.: Practical data privacy. " O’Reilly Media, Inc." (2023) 60. of Justice EUCJ, E.U.C.: Judgment of the court (grand chamber): Data protection commissioner v facebook ireland limited and maximillian schrems (16 July 2020) 61. of Justice EUCJ, E.U.C.: Breyer, case c-582/14 (19 October 2016) 62. of Justice EUCJ, E.U.C.: Goc vs european commission, case c-479/22 p. (7 March 2024) 63. of Justice EUCJ, E.U.C.: Iab europe, case c-604/22. (7 March 20246) 64. of Justice EUCJ, E.U.C.: Gesamtverband autoteile-handel (accès aux informations sur les véhicules), case c-319/22 (9 Novembre 2023) 65. Langarizadeh, M., Orooji, A., Sheikhtaheri, A.: Effectiveness of anonymization methods in preserving patients’ privacy: a systematic literature review. Health Informatics Meets eHealth pp. 80–87 (2018) 66. Levi-Valensin, M.: Gestion du secret pour la diffusion grand public de cubes multidimensionnels: une expérimentation au ssm agricultiure. Journées de méthodologie statistique de l’Insee (2022) 67. Mamanazarov, S.: De-identification and anonymization: legal and technical approaches (2024) 68. Moon-Ho Joo, H.Y.K.: Comparison of personal information de-identification policies and laws within the eu, the us, japan, and south korea. Government Information Quarterly 40, 101805 (2023) 69. Narayanan, A., Felten, E.W.: No silver bullet: De-identification still doesn’t work. White Paper 8 (2014) 70. Narayanan, A., Shmatikov, V.: Robust de-anonymization of large sparse datasets: a decade later. May 21, 2019 (2019) 71. Nguyen, B.: Techniques d’anonymisation. Statistique et société 2(4), 53–60 (2014) 72. Nguyen, B., Castelluccia, C.: Techniques d’anonymisation tabulaire: concepts et mise en oeuvre. arXiv preprint arXiv:2001.02650 (2020) 73. Nissenbaum, H.: Privacy in context: Technology, policy, and the integrity of social life. In: Privacy in context. Stanford University Press (2009) 74. 
OECD: Cross-border data flows, policy sub-issue (2022), https://www.oecd.org/ en/topics/cross-border-data-flows.html 75. Ohm, P.: Broken promises of privacy: Responding to the surprising failure of anonymization. UCLA l. Rev. 57, 1701 (2009) 76. Parliament, E., of the Council: Regulation (eu) 2024/1689 of the european parliament and of the council of 13 june 2024 laying down harmonised rules on artificial intelligence and amending regulations (ec) no 300/2008, (eu) no 167/2013, (eu) no 168/2013, (eu) 2018/858, (eu) 2018/1139 and (eu) 2019/2144 and directives 2014/90/eu, (eu) 2016/797 and (eu) 2020/1828 (artificial intelligence act), pe/24/2024/rev/1, oj l, 2024/1689, (2024) 77. Parliament, E., of the Council: Regulation (eu) 2018/1725 on the protection of natural persons with regard to the processing of personal data by the union institutions, bodies, offices and agencies and on the free movement of such data, and repealing regulation (ec) no 45/2001 and decision no 1247/2002/ec, pe/31/2018/rev/1, oj l 295, 21.11.2018, p. 39–98, art 15 (1) d (23 October 2018) 78. Party, A..D.P.W.: Opinion 05/2014 on anonymisation techniques (2014) 79. Party, A..D.P.W.: Opinion on the concept of personal data (2007) 80. PDPC: Guide to basic anonymisation (2022) 81. Peloquin, D., DiMaio, M., Bierer, B., Barnes, M.: Disruptive and avoidable: Gdpr challenges to secondary research uses of data. European Journal of Human Genetics 28(6), 697–705 (2020) 82. Québec: Regulation respecting the anonymization of personal information (2024) 83. Rubinstein, I.S., Hartzog, W.: Anonymization and risk. Wash. L. Rev. 91, 703 (2016) 84. council of Advisors on Science, P., Technology, W.H.: Big data and privacy: A technological perspective (2014) 85. Secretariat, P.I.P.C.: Anonymously processed information towards balanced promotion of personal data utilization and consumer trust (2017), https://www.ppc.go.jp/files/pdf/ The_PPC_Secretariat_Report_on_Anonymously_Processed_Information.pdf 86. 
Sharma, S.: Data privacy and GDPR handbook. John Wiley & Sons (2019) 87. Stadler, T., Troncoso, C.: Why the search for a privacy-preserving data sharing mechanism is failing. Nature Computational Science 2(4), 208–210 (2022) 88. Stalla-Bourdillon, S., Burt, A.: The definition of ’anonymization’ is changing in the eu: Here’s what that means (2023), https://iapp.org/news/a/the-definitionof-anonymization-is-changing-in-the-eu-heres-what-that-means 89. Stalla-Bourdillon, S.: Identifiability, as a data risk: Is a uniform approach to anonymisation about to emerge in the eu? Available at SSRN (2025) 90. Stalla-Bourdillon, S., Knight, A.: Anonymous data v. personal data-false debate: an eu perspective on anonymization, pseudonymization and personal data. Wis. Int’l LJ 34, 284 (2016) 91. Stallings, W.: Information privacy engineering and privacy by design: Understanding privacy threats, technology, and regulations based on standards and best practices. Addison-Wesley Professional (2019) 92. Sénéchal, J.: Publication de l’avis de l’edpb du 17 décembre 2024 sur le traitement des données personnelles dans le contexte des modèles d’ia : prémices d’une mutation profonde du rgpd ? (2024), https: //www.dalloz-actualite.fr/flash/publication-de-l-avis-de-l-edpb-du17-decembre-2024-sur-traitement-des-donnees-personnelles-da 93. USA: Health insurance portability and accountability act (1996) 94. Wallace, S.E.: What does anonymization mean? datashield and the need for consensus on anonymization terminology. Biopreservation and Biobanking 14(3), 224 (2016) 95. Weitzenboeck, E.M., Lison, P., Cyndecka, M., Langford, M.: The gdpr and unstructured data: is anonymization possible? International Data Privacy Law 12(3), 184–206 (2022) 96. Wolff, J., Lehr, W., Yoo, C.S.: Lessons from gdpr for ai policymaking. Va. JL & Tech. 27, 1 (2023) 97. 
Wood, A., Altman, M., Bembenek, A., Bun, M., Gaboardi, M., Honaker, J., Nissim, K., O’Brien, D.R., Steinke, T., Vadhan, S.: Differential privacy: A primer for a nontechnical audience. Vand. J. Ent. & Tech. L. 21, 209 (2018)
Anonymization is a foundational principle of data privacy regulation, yet its practical application remains riddled with ambiguity and inconsistency. This paper introduces the concept of anonymity-washing -- the misrepresentation of the anonymity level of "sanitized" personal data -- as a critical privacy concern. While both legal and technical critiques of anonymization exist, they tend to address isolated aspects of the problem. In contrast, this paper offers a comprehensive overview of the conditions that enable anonymity-washing. It synthesizes fragmented legal interpretations, technical misunderstandings, and outdated regulatory guidance and complements them with a systematic review of national and international resources, including legal cases, data protection authority guidelines, and technical documentation. Our findings reveal a lack of coherent support for practitioners, contributing to the persistent misuse of pseudonymization and obsolete anonymization techniques. We conclude by recommending targeted education, clearer technical guidance, and closer cooperation between regulators, researchers, and industry to bridge the gap between legal norms and technical reality.
[ "cs.CR", "cs.DB" ]
# 1 Introduction

In software development, AI-assisted code generators have become vital for increasing productivity, maintaining consistency, enforcing standards, and refining existing codebases. Industry leaders are increasingly adopting LLMs to automate code generation, testing, and project documentation. Similarly, LLMs help developers create code structures from specifications, improve development efficiency, and reduce the likelihood of errors. Traditionally, evaluations of code generators have centered on runtime efficiency and code quality, while commonly ignoring the energy consumption and environmental impact of the generated code. However, advances in Generative AI and the growing demand for Information Technology (IT) software systems have raised sustainability concerns, particularly around generative models [1]. The IT industry accounts for about 10% of global energy use today [2] and approximately 3% of global carbon emissions [3], exceeding the emissions of the aviation industry [4]. Training a large neural network can emit over 626,000 pounds of CO₂, nearly five times the lifetime emissions of an average car [5]. The demand for computing power, driven mainly by technologies like Generative AI, is expected to push data centers to 20% of global electricity consumption by 2030 [6]. On the other hand, making these power-hungry models more energy efficient can curb this demand: Li et al. [7] show that code transformations can cut cloud computing costs by around 42 percent without losing any functionality. This paper fills an important gap in this area by investigating the energy efficiency of code generated by LLMs and comparing it to canonical human-written solutions.
We systematically evaluate the energy efficiency of code snippets generated by 20 popular LLMs for 878 programming problems of varying difficulty levels selected from EffiBench [8]. We compare the generated code with human-written canonical solutions, measuring and evaluating energy consumption to identify patterns. This study gives us insight into the environmental impact and economic cost of using LLM-generated code, and highlights the importance of sustainable models in code generation. By raising awareness of the energy costs associated with AI-assisted code generation, we aim to encourage the development of AI tools that produce environmentally sustainable code. Among the studied LLMs, DeepSeek-v3, GPT-4o, and Claude-3.5-Sonnet generate the most energy-efficient code overall, whereas Llama-3.3-70B, Grok-2, and Gemini-1.5-Pro are among the least energy-efficient models. On average, human-written canonical solutions were approximately 1.17 times more energy efficient than DeepSeek-v3, 1.2 times more energy efficient than GPT-4o and Claude-3.5-Sonnet, 1.93 times more energy efficient than Llama-3.3-70B, and over 2 times more energy efficient than Grok-2 and Gemini-1.5-Pro. For specific algorithmic groups such as dynamic programming, backtracking, and bit manipulation, LLM-generated code can consume significantly more energy than human-written canonical solutions: for these problem categories, GPT-4o generates solutions consuming up to 46 times more energy than the canonical solution, Llama-3.3-70B up to 149 times, and Gemini-1.5-Pro up to 449 times. These results suggest that energy efficiency should become an important consideration in the development of next-generation AI-assisted code generation systems.
In summary, the paper makes the following contributions: (1) We conduct an extensive evaluation of 20 LLMs and compare the energy efficiency of the code they generate. (2) We ensure fair prompt inputs during code generation and a fair comparison of LLMs against each other and against canonical solutions by considering only problems correctly solved by all models. (3) We propose a comprehensive evaluation framework that jointly measures energy consumption, runtime performance, memory usage, the number of input and output tokens used for code generation, and the monetary cost of generating the code, providing a holistic understanding of the environmental and economic costs of LLM-generated code. (4) Our findings reveal that while advanced LLMs like DeepSeek-v3, GPT-4o, and Claude-3.5-Sonnet can produce more efficient code than other models, they are still considerably less energy efficient than human-written solutions. (5) Our analysis identifies clear patterns in energy inefficiency, showing that LLMs particularly struggle with Dynamic Programming, Backtracking, Bit Manipulation, and Greedy algorithms, while performing relatively better on problems involving Binary Search and Divide and Conquer.

# 2 Related Works

Green Software Practices: The idea of green software practices concerns energy consumption and sustainability across the end-to-end software development lifecycle [9, 10]. Examples include energy-aware coding and sustainable development methodologies [11, 12]. Sustainability must be considered from the start in all aspects of development, rather than as an afterthought [10]. Clean-code methodology can improve execution efficiency [13] by reducing instructions, removing duplicate code, and optimizing algorithms [14]. Additional suggestions for reducing energy use include parallelization, caching, and compression [15]. Henderson et al.
[16] offer standardized energy reporting so that others can replicate projects. The ESC framework provides a consistent, holistic sustainable-computing paradigm [17]. Addressing the carbon footprint of computation, the "Green Algorithms" study [18] introduces a quantitative model for assessing the environmental impact of computational processes. Efficiency of LLM-generated Code: When examining the efficiency of LLM-generated code, evaluations should consider correctness, memory, runtime, and energy [19]. Chen et al. [20] propose developing multi-faceted assessments. EffiBench [8] constructs an LLM code-efficiency benchmark with 1000 coding problems spanning different algorithmic complexities. Cruz et al. [21] state that efficiency can only be evaluated after correctness is established, whereas Niu et al. [22] argue that efficiency depends on neither correctness nor model size but can be improved piecewise through prompting. LLMs have a well-documented tendency to select unoptimized algorithms, iterations, and data structures [23]; perhaps the most obvious comparison is QuickSort vs. InsertionSort in terms of time complexity [23]. Energy-aware prompting can yield roughly a 30 percent reduction [24], and feedback loops using evaluator LLMs can identify further opportunities to optimize the generated code [25]. Reinforcement learning that rewards emission reduction has produced 23 to 50 percent lower emissions on generation tasks [26]. Wang et al. [27] study the effect of providing energy rewards. There is also emerging research interest in prompt engineering for reduced energy use, though with less clearly predictable results so far [28]. Vartziotis et al. [1] define "green capacity" to capture sustainability in AI-generated code.
# 3 Benchmarking Approach and Experiments

To construct our benchmark for comparing the energy efficiency of different LLMs, we use an approach similar to EffiBench [8], which is inspired by the common practice of evaluating developers' coding ability with problems from the competitive coding platform LeetCode [29]. EffiBench includes 1000 LeetCode problems that are frequently asked in interviews (>40%). These problems are paired with the most efficient solutions from the LeetCode discussion forum, labeled as the canonical human-written solutions. EffiBench also includes 100 test cases per problem, generated with a test case generator based on GPT-3.5-turbo.

# 3.1 Problem Dataset Selection

Before using the EffiBench dataset for our study, we thoroughly analyzed and tested it. Our analysis found that: (i) 12 problems in the dataset do not have comprehensive test cases; and (ii) 110 problems throw errors when we run the canonical solutions against the comprehensive test cases, due to one or more of the following: syntax errors in canonical solutions, syntax errors in comprehensive test cases, and improperly defined test cases for TreeNode, GraphNode, and LinkedList problems. We excluded these 122 problems and kept 878 problems for our comprehensive study.

Table 1: Algorithm categories and difficulty-wise problem distribution in the selected dataset.

Table 1 shows the detailed breakdown of the 878 problems we use to compare the energy efficiency of the LLMs in our study: 145 easy, 510 medium, and 223 hard problems. LeetCode defines easy, medium, and hard problems based on the complexity of the algorithms or data structures required to solve them.
The algorithmic methods include Greedy, Dynamic Programming (DP), Backtracking, Divide and Conquer, Depth-First Search (DFS), Breadth-First Search (BFS), Binary Search, Two Pointers, Sliding Window, Bit Manipulation, and Sorting. This diverse set of algorithmic methods supports a fair comparison of the studied LLMs across multiple problem subcategories with different computational complexities. In the table, one problem may be tagged with more than one algorithmic category, so the sum of problems across algorithmic categories for a given difficulty level may exceed the reported total.

# 3.2 LLMs Under Study

We analyze 20 popular LLMs that are widely used by developers for code generation tasks. We choose 7 open-source models (from DeepSeek, Meta, and Mistral) and 13 closed-source models (from Amazon, Anthropic, Google, OpenAI, and xAI). The selected models are listed in Table 2, which also shows the access type and cost of using each model, based on the cost per 1 million input tokens and per 1 million output tokens. Some closed-source models, such as GPT-4 Turbo and Claude 3.5 Sonnet, incur significantly higher token processing costs, with input/output costs reaching up to $5/$15 per million tokens. In contrast, open-source models such as Llama are less expensive to access, with input/output costs as low as $0.05/$0.08 per million tokens. The least expensive model in our study is Nova-Micro, with input/output costs of $0.02/$0.07 per million tokens. We use this input/output token cost information to compare the average cost of using each model to generate correct code for the given problems. The cost of open-source LLMs was determined from their publicly available API pricing, without accounting for any additional hardware or infrastructure costs.
Specifically, the models were accessed via third-party platforms such as Fireworks.ai and Groq Cloud, where the API usage charges directly reflect the input and output token processing costs. Since these experiments did not run the models on private hardware or owned cloud servers, no supplementary hardware-related expenses were included in the cost calculations.

Table 2: List of LLMs included in our study, their access types, and cost information (green highlights the lowest cost, red the highest cost).

# 3.3 Code Generation

Each LLM receives a standard prompt consisting of the problem statement, input/output specification, constraints, and example test cases (see Appendix for a sample prompt). To ensure fairness, all models are tested with the exact same prompt structure, so that performance differences stem from the models' respective behaviors rather than deviations in the representation of the task. The temperature parameter is set to each model's default value (0.5 for Llama-3.1-70B; 0.7 for Nova, Mistral, Llama-3.1-8B, and Llama-3.3-70B; and 1.0 for the remaining models). The model then produces an initial code solution. To ensure correctness before measuring efficiency, each solution goes through: (1) a syntax check to ensure compilability; (2) a run against the 100 test cases sourced from EffiBench; and (3) verification against edge cases to confirm robustness. If a solution does not compile or does not pass all 100 test cases for its problem, the LLM is asked to regenerate it. During regeneration, we provide the model with its original prompt as well as execution feedback on the code it just produced, prompting it to produce a new solution.
This process can repeat up to a maximum of 25 iterations per problem, retaining the first solution that returns correct results, which we then use to assess energy and memory usage (see Appendix A for the details of the code generation workflow).

# 3.4 Code Energy Consumption Measurements

To collect energy metrics, we use the power monitoring capabilities of the perf tool [30]. Specifically, we use power/energy-pkg/ to measure the energy consumption of the entire processor socket, including all cores and cache; power/energy-ram/ to measure the energy consumption of the random access memory (RAM) attached to the integrated memory controller; and cpu-clock to measure the execution time of the code. Our methodology consists of several key steps to ensure the accuracy of the energy measurements: (1) Before running any problem code, we measure the target system's idle power consumption over a 30-second period to establish a baseline; (2) For each execution of the problem code, we compute the adjusted energy consumption by subtracting the baseline idle power; (3) A cooldown period of 10 seconds between executions prevents thermal interference; and (4) Each problem's code is run 5 separate times in random order and the results are averaged to ensure statistical validity (see Appendix A.3 and Appendix B.2 for the details of the energy measurement methodology).

# 3.5 Code Memory Consumption Measurements

To collect memory metrics, we use the Python library memory_profiler, which samples the memory used by the process at given intervals (in our case every 0.001 seconds). For each problem's code, we measure the Average Memory Consumption Over Time, expressed in megabyte-seconds (MB·s), which captures how much memory a process uses and for how long, providing a cumulative view of memory consumption (see Appendix A.3 and Appendix B.3 for the details of the memory consumption measurement methodology).
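The two post-processing steps above (subtracting the idle baseline from each perf energy reading, and accumulating memory samples into MB·s) can be sketched as follows. This is a minimal illustration: the function names are our own, and the trapezoidal-rule integration is one plausible reading of "the integral of memory usage over runtime", not taken from the paper's artifact.

```python
def adjusted_energy_joules(measured_joules, idle_watts, runtime_seconds):
    """Remove the baseline idle draw accumulated over the run's duration
    from a raw perf energy reading (steps 1-2 of the methodology)."""
    return measured_joules - idle_watts * runtime_seconds

def memory_mb_seconds(samples_mb, interval_s=0.001):
    """Approximate the integral of memory usage over time (MB*s) from
    fixed-interval memory_profiler samples, via the trapezoidal rule."""
    return sum((a + b) / 2.0 * interval_s
               for a, b in zip(samples_mb, samples_mb[1:]))

def mean_adjusted_energy(raw_joules_per_run, idle_watts, runtimes_s):
    """Average the adjusted energy over the repeated runs (step 4)."""
    adjusted = [adjusted_energy_joules(j, idle_watts, t)
                for j, t in zip(raw_joules_per_run, runtimes_s)]
    return sum(adjusted) / len(adjusted)
```

For instance, with the paper's 0.001 s sampling interval, a process that holds a steady 10 MB for one second contributes roughly 10 MB·s.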
# 3.6 Testing Environment

To ensure fairness in our comparative analysis, all models were evaluated under identical conditions. This eliminates hardware and software variability, allowing an effective comparison of performance metrics. Our standardized test environment consisted of: (1) Platform: Chameleon Cloud [31]; (2) Processor: Intel Xeon Gold 6126 (24 cores); (3) Memory: 192 GiB RAM; (4) Operating System: Ubuntu 24.04.1 LTS; and (5) Kernel: 6.8.0-51-generic.

# 4 Evaluation and Analysis

In this section, we present our experimental findings. We analyze the results and evaluate the LLMs in terms of energy consumption and cost, compare their performance against canonical solutions written by humans, and present the cost breakdown of each LLM for carrying out the tasks.

# 4.1 Code Generation Accuracy Analysis

Since we do not have direct access to the server-side infrastructure of commercial LLMs to measure their computational resource usage, we develop an alternative approach to estimate the relative inference efficiency of these models during code generation. Our methodology captures both the success rate and the token-based resource consumption, which serves as a proxy for computational cost. To perform this assessment across different LLMs, we implement a systematic approach that accounts for both the success rate and the input/output tokens required for successful code generation by each model. Our analysis follows a structured iteration procedure: for each problem in our dataset (Table 1), we track the sequence of generation attempts, executing code generation up to 25 times per LLM until a solution passing all 100 tests is generated.
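Recording, for each problem, the attempt index at which the first passing solution appeared is enough to derive the pass metrics used in this study. A minimal sketch, with our own function names and with None marking a problem never solved within the 25-attempt cap:

```python
def empirical_pass_at_k(first_success_attempts, k):
    """Fraction of problems whose first passing solution appeared within
    the first k generation attempts (None = never solved within the cap)."""
    hits = sum(1 for a in first_success_attempts if a is not None and a <= k)
    return hits / len(first_success_attempts)

def average_pass_attempts(first_success_attempts):
    """Mean number of attempts needed, over the problems that were solved."""
    solved = [a for a in first_success_attempts if a is not None]
    return sum(solved) / len(solved)
```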
This repetition allows us to calculate three key metrics: (1) Average Pass@: how many attempts are required before successfully generating a solution that passes all test cases; a lower value indicates more efficient generation. (2) Average Total Input Tokens: the number of input prompt tokens across successful generations, which directly correlates with API costs when using LLM services. (3) Average Total Output Tokens: the size of the generated code solutions, which also influences the operational cost of deploying these models. Table 3 provides a comparative performance evaluation of the LLMs through Pass@1, Pass@10, and Pass@25 metrics over our entire problem set (introduced in Table 1). These pass metrics give the empirical probability that at least one correct solution is generated within the first, tenth, and twenty-fifth attempt, respectively, across a common set of problems and a shared prompt.

Table 3: Comparative analysis of Large Language Models (LLMs) in terms of Pass@k accuracy and the average number of input and output tokens needed to generate correct code.

For all three pass rates, DeepSeek-v3, GPT-4o, Gemini 2.0 Flash, and Claude 3.5 Sonnet consistently rank among the best-performing models in terms of the correctness of the generated code, demonstrating reliable problem-solving accuracy. DeepSeek-v3 has the highest Pass@1 score of 83.6%; DeepSeek-v3 and GPT-4o have Pass@10 scores of 89.7% and 89.1%, respectively; and GPT-4o has the highest Pass@25 score of 92.0%. Nova-Micro and Nova-Lite are the worst performers in this experiment, barely exceeding 50% at Pass@25. Models like Gemini 2.0 Flash (¢0.023) and Gemini 2.0 Flash-Lite (¢0.016) demonstrate competitive Pass@10 and Pass@25 rates above 82% at very low cost, offering more affordable options for large-scale code generation.

# 4.2 Code Energy Efficiency Analysis

To comprehensively assess the energy efficiency of the LLMs, we conduct the evaluation over two distinct benchmark sets. The first set comprises 298 common problems that all 20 LLMs successfully solved, with a relatively even distribution across algorithmic categories such as Divide & Conquer, Binary Search, and Bit Manipulation. The second benchmark expands the analysis to a larger and more diverse set of 576 common problems successfully solved by 11 LLMs, providing insight into how these models handle a wider range of algorithmic complexities and problem difficulties. For each model, along with the average Pass@ rate, average input token count, and average output token count, we compute the following key efficiency metrics across all common problems: (1) Avg. Cost of Code Generation (Cents): the average monetary cost of generating a code solution for each problem with the respective LLM. (2) Avg. Package Energy (Joules): energy consumed by the entire processor socket, including all cores and cache. (3) Avg. RAM Energy (Joules): energy consumed by the RAM. (4) Avg. Total Energy (Joules): the combined energy consumption of the processor package and RAM during execution of the generated code, reflecting the overall energy cost of the solution. (5) Avg. Runtime (milliseconds): the average time required to execute the generated solutions. (6) Avg. Memory Consumption (MB·s): the total memory usage over time during code execution, measured as the integral of memory usage over runtime.
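Metric (1) follows directly from the per-million-token prices in Table 2. A minimal sketch; the prices in the example are hypothetical, not any specific model's actual rates:

```python
def generation_cost_cents(input_tokens, output_tokens,
                          usd_per_m_input, usd_per_m_output):
    """API cost, in cents, of the tokens spent generating one solution,
    given the model's per-1M-token prices (as listed in Table 2)."""
    usd = (input_tokens * usd_per_m_input
           + output_tokens * usd_per_m_output) / 1_000_000
    return usd * 100.0

# Hypothetical prices of $1 per 1M input tokens and $2 per 1M output
# tokens: 500k input + 100k output tokens cost $0.70, i.e. 70 cents.
cost = generation_cost_cents(500_000, 100_000, 1.0, 2.0)
```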
# 4.2.1 Benchmark Set - I: 20 LLMs & 298 Common Problems

Of the 20 models assessed, we identify a subset of 298 problems from our dataset for which every LLM produces a correct answer passing all tests within the 25-regeneration limit. The algorithm-wise and difficulty-wise distribution of these 298 common problems is provided in Table 4. The majority of the problems fall under Binary Search (50), Sorting (85), and Bit Manipulation (35). In terms of difficulty, most problems are of medium complexity (179), followed by easy (89) and hard (30). This intersection guarantees that all relative efficiency experiments, including energy, memory, and token costs, take place over a fair, unambiguous set of problems. By narrowing the evaluation to this intersection subset, we remove the impact of differing model pass rates and ensure appropriate, consistent, and interpretable comparisons across models.

Table 4: Algorithm and difficulty-wise problem distribution on Benchmark Set - I.

We present the evaluation results for this subset of problems in this section. Table 5 compares the performance and resource utilization of human-written canonical solutions against the LLMs. Despite recent advancements in model correctness, as measured by Pass@ rates, the canonical solutions consistently outperform all evaluated LLMs across key sustainability metrics, including energy consumption and memory efficiency. Average Total Energy Consumption: On average, canonical solutions require only 5.77 J, significantly lower than any LLM-generated solution. The most efficient LLM, DeepSeek-v3, consumes 5.91 J, while the others range from 6.12 J to 12.00 J. Average Runtime: Human-written code executes in just 74.16 ms, outperforming all LLM-generated solutions, whose runtimes range from 75.64 ms (DeepSeek-v3) to 147.95 ms (GPT-4 Turbo).
Average Memory Usage: Canonical solutions also demonstrate superior memory efficiency, with an average consumption of 8.70 MB·s, lower than nearly all LLM-generated solutions. Only DeepSeek-v3 (8.67 MB·s) achieves comparable memory efficiency, while the other LLM-generated solutions range from 9.02 MB·s to 11.57 MB·s.

Table 5: Performance and resource usage comparison of LLMs against canonical solutions on Benchmark Set - I.

When we analyze the results by problem difficulty, all LLMs perform similarly to canonical solutions on easy problems. For medium-difficulty problems, solutions generated by DeepSeek-v3 consume approximately 1.02 times the energy of canonical solutions, while Nova-Lite consumes around 1.20 times. GPT-4 Turbo performs the worst on medium problems, consuming 2.08 times more energy than canonical solutions. For hard problems, DeepSeek-v3 performs comparably to canonical solutions in terms of energy efficiency, while solutions generated by Llama-3.1-70B consume approximately 1.40 times more energy. Analyzing performance across algorithmic categories, LLM-generated solutions for problems involving BFS, DFS, and Two Pointers achieve energy efficiency similar to canonical solutions. However, LLM-generated solutions for Sorting and Dynamic Programming problems consistently require more energy. For Binary Search and Bit Manipulation problems, most LLMs generate code as efficient as the canonical solutions, except for Llama-3.1-70B and Llama-3.3-70B, which produce solutions consuming significantly more energy.
Across all algorithms and difficulty levels, DeepSeek-v3 and GPT-4o consistently outperform other LLMs in terms of energy efficiency and runtime performance (see Appendix D and Appendix E for detailed results by problem difficulty level and algorithmic category).

# 4.2.2 Benchmark Set - II: 11 LLMs & 576 Common Problems

Table 6 presents the algorithm-wise and difficulty-level breakdown of the 576 problems included in Benchmark Set II. These 576 problems were successfully passed by 11 LLMs within the 25-regeneration limit. A significant portion of these problems fall under the categories of Dynamic Programming (158), Greedy algorithms (149), and Sorting (155). With respect to difficulty, the majority of problems are of medium complexity (347), followed by easy (119) and hard (110).

Table 6: Algorithm and difficulty-wise problem distribution on Benchmark Set - II.

Table 7 summarizes the performance and resource costs of human-written canonical solutions compared to LLM-generated solutions for the second benchmark set. The problems in this set are more complex, and the canonical solutions consistently outperform the LLM-generated solutions across all relevant efficiency metrics. Moreover, while some LLMs demonstrate better correctness through higher Pass@ rates, these improvements are often accompanied by significant increases in energy and memory consumption.

Average Total Energy Consumption: On average, canonical solutions require only 5.46 J, substantially lower than any LLM-generated solution. While DeepSeek-v3 remains the most energy-efficient LLM (6.37 J), it still consumes 16.7% more energy than the canonical solutions. Interestingly, Gemini-1.5-Pro exhibits the best Avg. Pass@ score (1.056) for this set of problems, but its energy consumption is among the highest at 11.15 J.

Average Runtime: Human-written code maintains the lowest average runtime at 69.36 ms, consistently outperforming all LLM-generated solutions. Although smaller models typically demonstrate faster runtimes, some models like Claude-3.5-Haiku still require 88.56 ms. Larger models such as Llama-3.1-70B and Llama-3.3-70B exhibit even higher runtimes at 112.01 ms and 130.02 ms, respectively. The highest runtime is by Gemini-1.5-Pro at 137.14 ms.

Table 7: Performance and resource usage comparison of LLMs against canonical solutions on Benchmark Set - II.

Average Memory Usage: Canonical solutions also demonstrate superior memory efficiency, with an average memory consumption of 6.62 MB·s. These values are consistently lower than those of any LLM-generated solutions in the study, where memory usage ranges from 6.90 MB·s (DeepSeek-v3) to a maximum of 10.73 MB·s (Llama-3.3-70B). When we analyze the results by difficulty, for easy problems all LLM-generated code performs similarly to the canonical solutions. The Llama-3.1-70B and Llama-3.3-70B models are comparatively better at giving energy-efficient solutions to medium problems than to hard problems. Analyzing the results across algorithms, DeepSeek-v3 consistently performs better than all other LLMs in almost all categories, except for BFS, Backtracking, and Bit Manipulation. The Llama models (3.1-70B, 3.3-70B) perform poorly across almost all algorithmic categories. While Grok-2 performs relatively worse than all other LLMs for Dynamic Programming and Binary Search, it gives more energy-efficient solutions to problems involving DFS, Two Pointers, Sorting, and Greedy algorithms. Gemini-1.5-Pro performs poorly in Dynamic Programming and Bit Manipulation while performing relatively well in other categories. GPT-4 Turbo performs poorly in Sorting-based problems while doing better in others.
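As a sanity check, the relative overheads reported above follow directly from the average energy figures; a minimal sketch using the numbers quoted in the text (the helper name is illustrative, not part of the paper's tooling):

```python
def energy_overhead_pct(llm_joules: float, canonical_joules: float) -> float:
    """Percentage of extra energy an LLM solution uses vs. the canonical one."""
    return (llm_joules / canonical_joules - 1.0) * 100.0

# Benchmark Set II averages quoted above: canonical 5.46 J, DeepSeek-v3 6.37 J.
print(round(energy_overhead_pct(6.37, 5.46), 1))  # → 16.7
```

The same calculation reproduces the "over 2 times" figure for the least efficient models, e.g. Gemini-1.5-Pro at 11.15 J gives an overhead above 100%.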
As the quality of code generated by Large Language Models (LLMs) improves, their adoption in the software industry for automated code generation continues to grow. Researchers primarily focus on enhancing the functional correctness of the generated code while commonly overlooking its energy efficiency and environmental impact. This paper investigates the energy efficiency of the code generated by 20 popular LLMs for 878 programming problems of varying difficulty levels and diverse algorithmic categories selected from the LeetCode platform by comparing them against canonical human-written solutions. Although LLMs can produce functionally correct results in most cases, our findings show that the performance and energy efficiency of LLM-produced solutions are often far below those of human-written solutions. Among the studied LLMs, DeepSeek-v3 and GPT-4o generate the most energy-efficient code, whereas Grok-2 and Gemini-1.5-Pro are among the least energy-efficient models. On average, human-generated canonical solutions are approximately 1.17 times more energy efficient than DeepSeek-v3, 1.21 times more energy efficient than GPT-4o, and over 2 times more energy efficient than Grok-2 and Gemini-1.5-Pro. For specific algorithmic groups such as dynamic programming, backtracking, and bit manipulation, LLM-generated code can consume up to 450 times more energy than human-generated canonical solutions.
# 1 Introduction

Brain disorders such as Alzheimer’s disease and brain tumors are substantial global health issues: millions of patients are affected, and medical practitioners face challenges in diagnosing them [2]. Studies have demonstrated that, depending on the patient’s age and the brain region involved, 52% of patients had Alzheimer’s Disease Neuropathologic Change (ADNC) in glioblastoma-adjacent cortex tissue [9]. Given the rising number of people with Alzheimer’s disease (246 million globally, 39 million of whom are critical cases) and brain tumors, there is a pressing demand for enhanced computational diagnostics that increase diagnostic speed. According to the World Health Organization [24], over 3 billion individuals worldwide suffer from various brain diseases, making early detection methods crucial. To detect these diseases, medical personnel interpret Magnetic Resonance Imaging (MRI) scans; although MRI is a revolutionary technology and currently the leading medical tool for discovering neurodegenerative and cancerous brain diseases, manual interpretation remains slow and error-prone. Moreover, obtaining the MRI report typically requires a follow-up visit to the doctor, which is both costly and time-consuming. Timely diagnosis for better medical outcomes therefore requires both accuracy and efficiency. The goal of this research is the development of an efficient and fast detection model for early-stage recognition of Alzheimer’s disease and brain tumors from MRI. By combining advanced imaging methods with machine learning, this study aims to improve diagnostic accuracy and thus patient outcomes and treatment planning.
While CNNs have shown remarkable effectiveness in image classification tasks, single architectures usually struggle to generalize across a range of neurological disorders. To address this, this work presents DGG-XNet, a novel hybrid deep learning model based on VGG16 and DenseNet121 that fully exploits their complementary feature extraction. DenseNet121 [10] greatly improves feature reuse through its dense connections, while VGG16’s [23] deep hierarchical design preserves robust spatial characteristics that are useful for medical imaging tasks. By combining these two models, DGG-XNet is designed to improve multi-class classification of brain tumors and Alzheimer’s disease.

# 2 Literature Review

# 2.1 Alzheimer’s disease

Alzheimer’s disease (AD) becomes an increasing concern for people as they age. The cellular degeneration caused by AD leads people to lose memory of familiar faces along with the ability to learn new information or recognize their relatives [12]. The disease also impairs the ability to determine which direction to go and to recognize the surrounding environment. In the advanced stages of disease progression, even basic abilities such as eating, breathing, and coughing fade away. AD and vascular dementia (VD) share sufficient overlapping symptoms, in behavioral and psychiatric domains as well as in language and memory dysfunction, to make diagnosing AD particularly challenging [11,5]. Early and accurate identification of AD and monitoring of its progression will greatly benefit patient care as well as treatment and prevention.

# 2.2 Brain Tumor

A brain tumor is an extremely dangerous medical condition that threatens the safety of the brain. Brain tumors frequently develop where existing vascular and nerve damage is present in the brain.
The progression of tumor growth determines whether the patient loses partial or complete sight to this condition [25]. The development of brain tumors also depends on serious myopia cases as well as ethnic characteristics and genetic background [8]. The condition is associated with insufficient blood circulation and prevents new nerve blood vessels from growing. Rapid, automated early diagnostic techniques have therefore become crucial in modern societies.

# 2.3 Deep Learning and Brain Diseases

Deep learning techniques have become widely used in medical image classification, providing effective approaches for classifying diseases in brain MRI scans. A study by [7] presents an automated system for early detection of Alzheimer’s disease using transfer learning for multi-class classification of brain MRI images; the proposed model achieved 91.70% accuracy. A similar study by [22], using two MRI datasets containing 6400 and 6330 images, employed a neural network classifier with VGG16 as the feature extractor. The model achieved accuracy, precision, recall, AUC, and F1-score of 90.4%, 0.905, 0.904, 0.969, and 0.904, respectively, on the first dataset, and 71.1%, 0.71, 0.711, 0.85, and 0.71 on the second dataset. The study by [18] presents an approach to optimize CNNs using the Fuzzy Gravitational Search Algorithm, focusing on key parameters like image block size, filter number, and filter size. Applied to the ORL and Cropped Yale databases, the optimized CNN outperformed non-optimized models. In AD classification, VoxCNN achieved 79% accuracy and 88% AUC, while ResNet achieved 80% accuracy and 87% AUC. A study by [20] proposes a WHHO-based DeepCNN model for brain tumor detection using MRI images. It combines Whale Optimization and Harris Hawks Optimization with a DeepCNN for segmentation and feature extraction.
The model achieved 0.816 accuracy, 0.791 specificity, and 0.974 sensitivity, outperforming other methods. Another study by [14] investigates a deep learning-based approach for tumor classification, combining whole-slide images and MRI scans. The model was evaluated on the 2020 Computational Precision Medicine Challenge in a 3-class unbalanced classification task and achieved impressive performance with a balanced accuracy of 0.913. Another research paper by [17] proposes an automatic brain tumor detection method using a genetic algorithm for image segmentation and a support vector machine (SVM) for classification of brain MRI images. The model uses a CNN with the Discrete Wavelet Transform (DWT) and classifies tumors as Normal vs. Not Normal, achieving 85% accuracy. Although considerable work has been done in multi-class brain disease classification, challenges such as accuracy, dataset imbalance, and identifying the most reliable regions of images for classification still remain. To address these issues, this study proposes the DGG-XNet model, which combines VGG16 and DenseNet121 to enhance accuracy. Additionally, the model incorporates two Explainable AI (XAI) techniques, Grad-CAM and Integrated Gradients, to make its predictions more transparent and interpretable, thereby improving its practical applicability in real-world medical imaging. Table 1 presents a comparison of accuracy among the discussed existing works.

Table 1: Accuracy Comparison of Previous Works

# 3 Methodology

In this section, we describe our proposed methodology. An extensive review of earlier studies in the area of neuroimaging-based disease classification was conducted before this study. Based on the insights, a dual-path hybrid model was constructed and trained on a balanced dataset. The methodology pipeline is summarized in Fig. 1.

Fig. 1: Methodology Overview

# 3.1 Dataset Description

The study utilized two publicly available datasets.
The “BraTS 2021 Task 1” dataset [3,16,4], which contains MRI volumes for brain tumor classification, and the “alzheimers-dataset-4-class-of-images” dataset from Kaggle, which contains 2D axial brain images across four categories (Non-Demented, Very Mild Demented, Mild Demented, and Moderate Demented), were collected and merged.

# 3.2 Data Preprocessing

Images were resized to 224 $\times$ 224 $\times$ 3 and normalized for model compatibility. From BraTS 2021 Task 1, only the T1-weighted modality was used in this study. 2D slices were extracted from the 3D volumes by selecting evenly spaced axial slices to preserve critical spatial information. For the Alzheimer’s dataset, images were pre-processed and resized uniformly. To prevent model bias due to class imbalance, the dataset was balanced using downsampling: the number of samples in each class was reduced to match the minority class, resulting in 500 samples per class.

# 3.3 Dataset Split

The dataset was partitioned into training (70%), validation (20%), and testing (10%) sets using stratified sampling, ensuring that each subset retained a balanced class distribution. Table 2 presents the distribution of the dataset across the different sets.

Table 2: Dataset Distribution

# 3.4 Proposed Model

This study proposes a hybrid deep learning architecture that combines the feature extraction power of two widely used convolutional neural networks: VGG16 and DenseNet121. Both models were initialized with pre-trained ImageNet [6] weights to benefit from transfer learning, and their convolutional layers were initially frozen to retain learned representations. Each input image is passed through both the VGG16 and DenseNet121 backbones.
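The slice-extraction and class-balancing steps described above can be sketched as follows. This is a minimal illustration, not the authors' code: the number of slices per volume is not specified in the paper, and the 240 × 240 × 155 shape is the standard BraTS volume size used here only as an example.

```python
import numpy as np

def extract_axial_slices(volume: np.ndarray, n_slices: int) -> np.ndarray:
    """Select evenly spaced axial slices from a 3D MRI volume of shape (H, W, D)."""
    depth = volume.shape[-1]
    idx = np.linspace(0, depth - 1, n_slices).astype(int)  # evenly spaced indices
    return volume[..., idx]

def downsample_to_minority(labels: np.ndarray, rng=None) -> np.ndarray:
    """Return sample indices that balance every class to the minority-class count."""
    rng = rng or np.random.default_rng(0)
    classes, counts = np.unique(labels, return_counts=True)
    n_min = counts.min()
    keep = [rng.choice(np.where(labels == c)[0], n_min, replace=False) for c in classes]
    return np.sort(np.concatenate(keep))

# Example: a BraTS-like volume reduced to 5 evenly spaced axial slices.
vol = np.zeros((240, 240, 155))
print(extract_axial_slices(vol, 5).shape)  # → (240, 240, 5)
```

In the paper's setting, `downsample_to_minority` would reduce each class to 500 samples to match the minority class.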
Feature maps are extracted using Global Average Pooling [15] from each model and then concatenated to form a unified feature vector:

$$ F = \mathrm{Concat}\left(\mathrm{GAP}(\mathrm{VGG16}(x)),\ \mathrm{GAP}(\mathrm{DenseNet121}(x))\right) $$

where $x$ is the input image and $F$ is the fused feature vector. The merged features are passed through a series of fully connected layers with ReLU activation [1] functions and dropout for regularization. The final output layer uses the softmax activation [19] to classify the input into one of three categories: Tumour, Normal, or Alzheimer’s.

$$ \hat{y} = \mathrm{softmax}(W_2 \cdot \mathrm{ReLU}(W_1 \cdot F + b_1) + b_2) $$

The model is compiled using the Adam optimizer [13] with a learning rate of 0.0001. Adam is an adaptive optimizer that updates parameters using first and second moment estimates of the gradients:

$$ \theta_t = \theta_{t-1} - \frac{\alpha}{\sqrt{\hat{v}_t} + \epsilon} \cdot \hat{m}_t $$

where $\theta_t$ is the model parameter at time step $t$, $\hat{m}_t$ and $\hat{v}_t$ are the bias-corrected estimates of the first and second moments, and $\epsilon$ is a small constant for numerical stability. The categorical cross-entropy [26] loss function was used to optimize the model, defined as:

$$ \mathcal{L}_{\mathrm{CCE}} = -\sum_{i=1}^{C} y_i \log(\hat{y}_i) $$

where $C$ is the number of classes, $y_i$ is the true label, and $\hat{y}_i$ is the predicted probability for class $i$. To prevent overfitting and reduce unnecessary computation, the EarlyStopping technique was applied during training, monitoring the validation loss and restoring the best weights when no improvement is observed after 5 consecutive epochs. Figure 2 presents the architecture of the proposed model. Fig.
2: Architecture of the proposed model

# 3.5 Model Evaluation Metrics

To evaluate the model’s performance, several commonly used classification metrics were applied: accuracy, precision, recall, and F1-score. Each metric provides insight into a different aspect of model performance.

Accuracy measures the proportion of total correct predictions made by the model:

$$ \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} $$

Precision shows how many of the predicted positive samples were actually correct:

$$ \mathrm{Precision} = \frac{TP}{TP + FP} $$

Recall (or sensitivity) measures how many actual positive samples were correctly identified:

$$ \mathrm{Recall} = \frac{TP}{TP + FN} $$

F1-score is the harmonic mean of precision and recall, providing a balanced metric that is especially useful when classes are imbalanced:

$$ \mathrm{F1\text{-}score} = 2 \times \frac{\mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} $$

Here, $TP$, $TN$, $FP$, and $FN$ represent true positives, true negatives, false positives, and false negatives, respectively.

# 3.6 Explainable AI (XAI)

To better understand and interpret the decisions made by the proposed hybrid model, Explainable AI (XAI) [6] techniques were utilized. These techniques highlight the regions of the input image that influenced the model’s predictions. In this study, Grad-CAM and Integrated Gradients were used to generate visual explanations.

Grad-CAM. Gradient-weighted Class Activation Mapping (Grad-CAM) [21] is a widely used technique to visualize the important regions in an image that contribute to a model’s decision. Grad-CAM works by computing the gradient of the predicted class score with respect to the feature maps of the final convolutional layer.
These gradients are then used to create a coarse heatmap that highlights discriminative regions. The Grad-CAM heatmap $L_{\text{Grad-CAM}}^{c}$ for class $c$ is computed as:

$$ L_{\text{Grad-CAM}}^{c} = \mathrm{ReLU}\left(\sum_{k} \alpha_{k}^{c} A^{k}\right) $$

where $A^{k}$ represents the $k$-th feature map, and $\alpha_{k}^{c}$ is the weight computed by global average pooling over gradients:

$$ \alpha_{k}^{c} = \frac{1}{Z} \sum_{i} \sum_{j} \frac{\partial y^{c}}{\partial A_{ij}^{k}} $$

Here, $y^{c}$ is the score for class $c$, and $Z$ is the number of pixels in the feature map. The ReLU function ensures that only positive influences are visualized.

Integrated Gradients. Integrated Gradients [27] was also employed to gain finer, pixel-level insights into the model’s decisions. This method attributes the prediction of a model to its input features by integrating the gradients of the output with respect to the input along a straight path from a baseline (usually a black image) to the actual input. Formally, the integrated gradient for the $i$-th input feature is given by:

$$ \mathrm{IG}_{i}(x) = (x_{i} - x_{i}^{\prime}) \times \int_{\alpha=0}^{1} \frac{\partial F(x^{\prime} + \alpha(x - x^{\prime}))}{\partial x_{i}}\, d\alpha $$

where $x$ is the input, $x^{\prime}$ is the baseline, and $F$ is the model’s output function. This approach helps identify which pixels in the image contributed most significantly to the prediction. The combination of Grad-CAM and Integrated Gradients provides both coarse and fine-grained visual explanations, enhancing model transparency and building trust in clinical applications.
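The Integrated Gradients integral above is computed in practice as a Riemann sum over the straight-line path. A minimal NumPy sketch, using a toy differentiable function in place of the trained network and finite-difference gradients (the paper itself would use the network's autodiff gradients):

```python
import numpy as np

def integrated_gradients(F, x, baseline, steps=100, eps=1e-5):
    """Approximate IG_i(x) with a midpoint Riemann sum of finite-difference gradients."""
    x, baseline = np.asarray(x, float), np.asarray(baseline, float)
    total = np.zeros_like(x)
    for alpha in (np.arange(steps) + 0.5) / steps:  # midpoints of [0, 1]
        point = baseline + alpha * (x - baseline)
        # finite-difference gradient of F at `point`, one coordinate at a time
        grad = np.array([
            (F(point + eps * e) - F(point - eps * e)) / (2 * eps)
            for e in np.eye(len(x))
        ])
        total += grad
    return (x - baseline) * total / steps

# Sanity check: for a linear model F(x) = w·x, IG reduces to w_i * (x_i - x'_i),
# and the attributions sum to F(x) - F(baseline) (the completeness axiom).
w = np.array([2.0, -1.0, 0.5])
F = lambda v: float(w @ v)
ig = integrated_gradients(F, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
print(np.round(ig, 4))  # per-feature attributions
```

The linear case makes the method easy to verify by hand, which is why it is the standard sanity check for IG implementations.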
# 4 Results and Discussion

The performance of the proposed DGG-XNet model was evaluated on a multi-class brain MRI classification task involving three classes: Tumour, Normal, and Alzheimer’s. The model achieved a test accuracy of 91.33%, significantly outperforming several well-established deep learning architectures. Table 3 presents a comparison between DGG-XNet and other popular CNN-based models, including VGG16, DenseNet121, MobileNetV2, InceptionV3, ResNet variants, and EfficientNetB3. Among these, VGG16 was the closest in performance, achieving an accuracy of 84.67%. Other models showed moderate to lower performance, with ResNet101 recording the lowest at 68.00%. The strong performance of DGG-XNet can be attributed to the hybrid fusion of the VGG16 and DenseNet121 feature extractors, enabling the model to capture both local and global patterns effectively. Additionally, techniques such as transfer learning, fine-tuning, data balancing, and the use of early stopping helped improve generalization while avoiding overfitting.

Table 3: Model Accuracy Comparison

The results confirm that DGG-XNet offers a more accurate and reliable solution for brain MRI classification and demonstrates the effectiveness of hybrid feature fusion in complex medical imaging tasks.

# 4.1 Training and Validation Performance

Figure 3 shows the training and validation accuracy and loss curves for the proposed model. The curves indicate that the model was able to learn effectively, with both training and validation accuracy gradually increasing and validation loss stabilizing, suggesting minimal overfitting.

# 4.2 ROC Curve Analysis

To evaluate the model’s ability to distinguish between the classes, ROC curves were plotted for each class using one-vs-rest classification. As shown in Figure 4b, the proposed model achieved high AUC scores for all three classes, confirming its strong classification performance.

Fig. 3: Training and validation accuracy and loss curves.
Fig. 4: Performance evaluation: Confusion Matrix and ROC Curve

# 4.3 Confusion Matrix

Figure 4a illustrates the confusion matrix based on a balanced test set (50 samples per class). The model demonstrated a high number of correct predictions across all classes, with very few misclassifications, further validating its effectiveness.

# 4.4 Explainable AI Visualizations

To gain insight into the decision-making process of the proposed model, explainable AI techniques such as Grad-CAM and Integrated Gradients were applied. Figure 5 displays these visualizations. Grad-CAM highlights the important regions of the image that influenced the model’s prediction, while Integrated Gradients provides pixel-level attribution. Together, these methods enhance the interpretability and transparency of the model, making it more trustworthy in clinical scenarios.

Fig. 5: Explainable AI: Grad-CAM & Integrated Gradients

# 4.5 Evaluation Metrics

In addition to accuracy, precision, recall, and F1-score were computed to provide a more comprehensive evaluation of the proposed model. The results are summarized in Table 4, based on a test set of 150 samples. The model achieved an overall accuracy of 91.33%, with a macro-average F1-score of 90.00%.

Table 4: Precision, Recall, and F1-score of the proposed model on the test set.
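The evaluation metrics defined in Section 3.5 follow directly from confusion-matrix counts; a minimal sketch (the counts below are illustrative for one class in a 150-sample one-vs-rest setting, not the paper's actual confusion matrix):

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Accuracy, precision, recall, and F1-score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Illustrative one-vs-rest counts on a 150-sample test set.
m = classification_metrics(tp=45, tn=95, fp=5, fn=5)
print({k: round(v, 3) for k, v in m.items()})
# → accuracy 0.933, precision/recall/F1 all 0.9
```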
Accurate diagnosis of brain disorders such as Alzheimer's disease and brain tumors remains a critical challenge in medical imaging. Conventional methods based on manual MRI analysis are often inefficient and error-prone. To address this, we propose DGG-XNet, a hybrid deep learning model integrating VGG16 and DenseNet121 to enhance feature extraction and classification. DenseNet121 promotes feature reuse and efficient gradient flow through dense connectivity, while VGG16 contributes strong hierarchical spatial representations. Their fusion enables robust multiclass classification of neurological conditions. Grad-CAM is applied to visualize salient regions, enhancing model transparency. Trained on a combined dataset from BraTS 2021 and Kaggle, DGG-XNet achieved a test accuracy of 91.33\%, with precision, recall, and F1-score all exceeding 91\%. These results highlight DGG-XNet's potential as an effective and interpretable tool for computer-aided diagnosis (CAD) of neurodegenerative and oncological brain disorders.
# I. INTRODUCTION

The rapid digitalization of healthcare has led to the proliferation of electronic health records (EHRs), offering unprecedented opportunities for data-driven medical research and clinical decision-making [19], [41]. However, leveraging this data at scale remains challenging due to stringent privacy regulations, security risks, and ethical concerns associated with centralized data storage. Traditional machine learning (ML) approaches rely on the aggregation of patient data into centralized repositories, making them vulnerable to massive data breaches and non-compliance with regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States and the General Data Protection Regulation (GDPR) in Europe [15], [60]. To address these concerns, federated learning (FL) has emerged as a promising paradigm for collaborative model training without the need for direct data sharing [50]. Federated learning enables multiple healthcare institutions to run ML models locally on their private datasets while sharing only model updates instead of raw patient data [33]. This decentralized approach significantly reduces the risk of data leaks and helps maintain compliance with privacy regulations. Despite these advantages, FL in healthcare presents several challenges, particularly in terms of data heterogeneity, security vulnerabilities, and computational overhead [7], [53]. One major obstacle is the alignment and harmonization of heterogeneous EHR formats, which can vary significantly across institutions due to differences in clinical terminologies, data collection standards, and infrastructure [21]. Data harmonization is the practice of “reconciling” various types, levels, and sources of data into formats that are compatible and comparable, and thus useful for better decision-making [1], [3]. Data harmonization often relies on probabilistic and/or ML-based entity resolution techniques [13].
Schema matching [22] automates the identification of correspondences between fields in different datasets, such as aligning “DOB” in one database with “DateOfBirth” in another. Modern tools leverage natural language processing (NLP) and ontology-based reasoning to improve accuracy [70]. Type conversion ensures consistent representation of data types, such as converting blood pressure values stored as strings into standardized numeric formats or translating medication codes between vocabularies like RxNorm [48] and SNOMED CT [14]. In healthcare, these automated techniques support critical applications like patient cohort identification [3], population health monitoring, and real-time clinical decision support, reducing manual curation and improving data quality for ML pipelines and interoperable health information systems. Large language models (LLMs), trained on vast corpora of biomedical literature and structured clinical data, have demonstrated strong capabilities in natural language understanding and information extraction [24]. They can be leveraged to standardize disparate EHRs, align ontologies, and mitigate discrepancies in medical coding practices across different hospitals and research centers [64]. However, ensuring the trustworthiness, bias mitigation, and interpretability of LLMs in clinical applications remains a critical research frontier [5], [11]. This paper explores the intersection of FL and healthcare, focusing on data harmonization strategies within privacy-preserving data access environments. With attention to security and regulatory compliance challenges, we are working on the integration of LLM-based functionality into a programmable FL framework to enable healthcare data alignment. We show how our two-step ontology- and LLM-based data alignment strategy was instrumental in the mapping of healthcare data for a real-world project.
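The candidate-matching idea described above can be illustrated with a minimal sketch. Here stdlib string similarity stands in for the vector-space embeddings used in practice, and the field names and threshold are illustrative, not from the project's actual schemas:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """String similarity, standing in for embedding cosine similarity."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_candidates(src_fields, dst_fields, threshold=0.3):
    """Step 1: propose (src, dst, score) pairs above a threshold.
    Step 2 (not shown) would have an LLM accept or reject each pair."""
    pairs = []
    for s in src_fields:
        best = max(dst_fields, key=lambda d: similarity(s, d))
        score = similarity(s, best)
        if score >= threshold:
            pairs.append((s, best, round(score, 2)))
    return pairs

print(match_candidates(["DOB", "Sex"], ["DateOfBirth", "Gender", "Ward"]))
# "DOB" pairs with "DateOfBirth"; "Sex"/"Gender" falls below the string-similarity
# threshold, illustrating why semantic embeddings or ontologies are also needed.
```

The Sex/Gender miss is the motivating case for the ontology-based path and the LLM accept/reject step: purely lexical candidates are not enough for semantically equivalent but lexically distant field names.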
In the first step, the converter generated matching candidates using (a) vector-space embeddings [42] and/or (b) ontology-based converter matching. In the second step, an LLM was used to accept or reject the matching pairs. The paper is structured as follows. In Section II we introduce the FL framework that we intend to empower with LLM support to improve the experience of research scientists designing workflows. In this work, we focus on the integration of LLMs into functions that perform data harmonization at domain nodes. Section III discusses the problem of healthcare data harmonization, focusing on disambiguation via alignment with biomedical ontologies. We propose an LLM-empowered pipeline to automatically convert natural language annotations to the corresponding ontology codes. Section IV introduces a collaborative real-world project in healthcare as a use case where the pipeline was applied to handle the semantic heterogeneity of EHR data. In Section V, we review related work. Section VI concludes our work and outlines future steps.

# II. FEDERATED LEARNING FRAMEWORKS

FL applications face issues such as diversity in data types, model complexity, privacy concerns, and the need for efficient resource distribution. The research community has been working on minimizing this effort by designing dedicated frameworks, reusable architectural patterns, and domain-specific languages to orchestrate workflows and express security and privacy policies. Among such frameworks are Vantage6 [44] and Brane [61], which jointly provide an infrastructure to design, deploy, and run federated workflows.

# A. Vantage6

Vantage6 [44] allows researchers to perform ML operations on clients’ data located at worker (computing) nodes. The process is orchestrated by a server (central node). A researcher can submit a task to the server with an algorithm and input parameters. The algorithm is first implemented using Vantage6 tools and built into a Docker image [23].
After a task is submitted to the server, the server sends the task information to a computing node. A computing node automatically detects the server, gets the task information, and executes the algorithm on local data. The intermediate results are sent back to the server for aggregation, and the iterative FL process repeats to update the global model. The final result is returned to the researcher when the computation is complete.

# B. Brane

Brane is a programmable framework for secure data exchange and scientific workflow orchestration. The primary purpose of Brane is to support efficient and secure data exchange among research organizations [61]. Brane utilizes containerization to encapsulate functionalities as portable building blocks. Through programmability, application orchestration can be expressed using an intuitive domain-specific language or user-friendly interactive notebooks. End users with limited programming experience are empowered to compose workflows without the need to deal with the underlying technical details.

Fig. 1. Vantage6 [44]: the server loads its configuration parameters and exposes its RESTful API to nodes.

A key principle of Brane’s design is the clear separation of concerns based on specialized user roles, as shown in Figure 2: (i) domain scientists focus on data analysis without managing execution details, (ii) software engineers develop and optimize data processing workflows, and (iii) systems engineers maintain the infrastructure and ensure system efficiency.

Fig. 2. Brane’s approach to distributed workflow implementation via separation of user roles [61].

The only requirement for remote resources is the ability to install and run containers. Brane supports Docker [23], Singularity [29], and Kubernetes clusters [10]. The runtime system can automatically convert packages (Open Container Initiative (OCI) images [49]) to the appropriate container image format.
By default, direct access to resources is assumed; when this is not possible (e.g., not permitted by participating organizations or regulatory policies), an optional indirection layer is enabled.

# C. EPI platform

Brane is a key component of the Enabling Personalized Interventions (EPI) framework [6]. The EPI framework provides a secure, distributed data platform that supports personalized health insights through analytics and decision support tools. Its main features are:

• allowing analysts to process data across multiple organizations without dealing with technical complexities;
• enforcing user-defined data policies during all stages of data processing.

Various extensions were added to Brane to provide a seamless experience for data scientists while enforcing strict data protection measures. The framework automates the setup of the underlying infrastructure while considering the different requirements communicated by its components. Figure 3 shows the main components of the framework, which are:

• the orchestrators (both at the application level and the infrastructural level);
• the policy management system;
• the components required to be present at the participating institutions: the resource provisioner and the authorizers.

Fig. 3. EPI Framework [26].

At the network level, the EPI platform enforces security and low-level policies to protect data sharing.

# D. Discussion

In collaboration with health organizations (St. Antonius Ziekenhuis, UMC Utrecht, and Princes Maxima Centrum), a proof-of-concept collaborative network has been deployed to process patient data across the two hospitals and the Dutch supercomputer center SURF [6]. This network is a successful application of the EPI framework for collaborative research on privacy-sensitive data in the healthcare domain. The EPI project focused on fast and secure computations across healthcare institutions; it prioritized usability, data privacy, and security.
However, it has been applied within a closed consortium of organizations that agreed to participate in a joint FL project. The partners:
• agreed on a common research project a priori;
• exchanged information, discussed data content and access strategies, and agreed on learning workflows;
• worked together to prepare and align data for FL;
• discussed and agreed on compliance beforehand;
• trusted each other to act ethically in protecting data.
Architectural FL solutions and workflow management frameworks often leave the problem of data alignment to their users, as these problems are application specific. Domain experts are not always able to meet the technical requirements imposed by the FL platforms. This hinders their practical applications. Regulatory requirements such as GDPR and HIPAA are not the major concern within closed consortia. For example, the GDPR’s “right to be forgotten” is often not part of the automated workflow, i.e., participating organizations cannot request that their data, metadata, user profiles, etc. be removed or rectified via the platform’s API requests. The FL scope and architectural pattern [36], [37], data content, schemata, and organizational conventions are known beforehand. These aspects make the FL workflow design and orchestration somewhat easier than in an “open-ended” scenario where there is no prior knowledge about the partners and their data. However, the FL solutions designed under these conditions are hardly ever reusable. The workflows are hard to reproduce even for identical projects because the FL methods are designed to work with a priori known data (types, formats, distributions, dimensions, and so on).

# E. Towards an Open-FL platform

A unique selling point of the Brane framework lies in its flexibility. Unlike many specialized collaborative and FL platforms in healthcare [47], [59], [65], it provides a technology for anyone interested in secure private data access to develop their own solution.
In this respect, and provided that a good support infrastructure becomes available, this tool has potential similar to enterprise low-code application platforms [18], for which the global market is experiencing significant growth: valued at USD 24.83 billion in 2023, it is projected to reach USD 101.68 billion by 2030 [20]. We are working on the adaptation of the Brane/EPI frameworks to serve open consortia of (healthcare) organizations interested in on-demand FL networks. In our vision, any scientist interested in deploying an FL network to answer a research question (i) designs a high-level workflow and, (ii) in collaboration with software and system engineers, deploys a project server that allows any interested organization with potentially relevant data to (iii) join the call by downloading a pre-configured container, through which the scientist (iv) pushes ML algorithm images and executes them on local data, and (v) receives the results back at the server for aggregation. The combination of Vantage6, Brane, and EPI tools is able to provide technical solutions for multiple issues relevant to this vision. Although Brane is not a dedicated FL platform, its programmable nature and universality make it suitable for FL workflow implementation. L. Liu [35] showed in her thesis how Vantage6 FL algorithm images can be deployed and executed on a Brane network. Notably, this work mentioned the lack of data converters (even simple data type conversions) as the main obstacle to the deployment of FL workflows. The main difficulty in realizing this vision is the lack of ready-to-use resources and a community support network to keep the FL workflow design process “low-code”. Dhooper [12] discusses the advantages of the agnostic approach to ML. In particular, a data-agnostic approach signifies the ability of a learning system to process data collected from heterogeneous data sources.
The ML model should be designed in such a way that it can process unstructured data as seamlessly as it processes structured data. A collection of data-agnostic ML methods available for use within Brane containers would significantly increase the prospects of the framework’s application for FL research. While, as many student projects have shown, the integration of Python-based ML libraries (e.g., PyTorch [51]) within Brane is rather straightforward, regulatory compliance and data harmonization are two challenging aspects of an FL process that research scientists are left to implement at the application level. The proven success of large language models (LLMs) in processing unstructured data, requests in natural language, code generation, summarization, transformation, and data mapping makes them a promising tool for bridging the gaps within FL workflows. Hence, we aim at integrating LLM-based assistants into the Brane/EPI framework to:
• simplify the translation of compliance policies into the specification formats supported by the framework (eFlint [8], Datalog [16]);
• enable data harmonization pipelines to overcome structural and semantic heterogeneity of federated data.
In this paper, we focus on the second aspect of this roadmap. In particular, we integrate an LLM to provide a data-agnostic conversion function that aligns patient EHRs with standardized biomedical vocabularies.

# III. ALIGNING BIOMEDICAL DATA VIA ONTOLOGIES

Biomedical ontologies such as SNOMED CT [14], ICD-10 [66], MONDO [45], and HPO [30] were created to standardize the representation of medical concepts, enabling accurate communication, data integration, and interoperability across healthcare and research domains. SNOMED CT, used in over 50 countries, supports EHRs and clinical decision-making [31]. ICD-10, maintained by the World Health Organization (WHO), is the global standard for disease classification, used in over 150 countries for epidemiology, billing, and public health monitoring.
HPO standardizes descriptions of human phenotypes and is widely adopted in genomic diagnostics and rare disease research. These ontologies are crucial for improving patient care, medical research, and health data analytics worldwide. MONDO integrates multiple disease ontologies to unify rare disease research across organizations like Orphanet, OMIM, and ClinGen. The Orphanet Rare Disease ontology (ORDO) is jointly developed by Orphanet and the EBI to provide a structured vocabulary for rare diseases. The biomedical domain encompasses an immense variety of terminologies to represent diseases, diagnoses, treatments, laboratory findings, and clinical outcomes. Table I summarizes the purpose, key features, and use cases of the biomedical ontologies relevant to our work.

The aforementioned ontologies and resources provide structured vocabularies that help researchers and clinicians communicate and exchange medical data. However, these ontologies differ in their focus, level of granularity, and intended applications. Some are used for clinical documentation (SNOMED, ICD), some for genetic research (HPO, ClinGen, OMIM), and others for pharmacology and treatment classification (RxNorm [48], ATC [67], MedDRA [9]). Furthermore, medical knowledge is constantly evolving, requiring frequent updates to these ontologies. This results in multiple versions and implementations across institutions and countries, making interoperability a significant challenge. Even widely used ontologies such as SNOMED CT and ICD-10 have multiple regional implementations and undergo frequent updates [31]. Efforts such as the OHDSI (Observational Health Data Sciences and Informatics) Common Data Model, the LOINC-SNOMED harmonization initiative, and the UMLS Metathesaurus are crucial in ensuring that FL models trained on distributed datasets can produce reliable, interpretable, and generalizable results.
However, full automation of this process remains an open research challenge, requiring advances in ontology alignment, machine learning-driven entity resolution, and human-expert validation.

Annotating clinical observations from EHRs with ontology terms is useful for the selection of patient cohorts and the creation of federated datasets for ML training and evaluation on images or laboratory tests of patients with certain diseases or pathologies. Figure 4 presents a generic LLM-based conversion process to map unannotated data to a target ontology terminology. The same process can be used to align the data with alternative annotations. The first step consists of (A) enabling Retrieval Augmented Generation (RAG) [32] over the target vocabulary space to (B) find the best-matching pairs of input data with the standardized terms. The second step consists of (C) formulating the acceptance criteria and asking an LLM to evaluate each generated matching pair, providing the criteria and the pairs in the request prompt.

# IV. DATA ALIGNMENT FOR DRUG REPORTING USE CASE

The Maternal and Pediatric Precision in Therapeutics (MPRINT) hub aggregates, presents, and expands the available knowledge, tools, and expertise in maternal and pediatric therapeutics to the broader research, regulatory science, and drug development communities. It conducts therapeutics-focused research in obstetrics, lactation, and pediatrics while enhancing inclusion of people with disabilities [52]. The MPRINT working group processes data from multiple healthcare organizations. Relevant data encompass a wide range of information, including patient demographics, diagnoses, medications, procedures, treatment history, laboratory tests, and diagnostic images. In this section, we present our evaluation of a combination of ontology- and LLM-based pipelines to align textual data for an FL study within the MPRINT initiative.
This particular MPRINT study is focused on a drug reporting use case that aims to establish the relationship between exposure to certain medications or chemical substances during pregnancy and their effect on pregnancy, postpartum, and/or newborn health.

# A. Unannotated dataset

The first dataset, provided by Kids First DRC (Pediatric Cancer and Rare Disease Care) [27], included 512 clinical records with pregnancy characteristics/risk factors, exposure to drugs or chemicals, and the outcomes of such exposure on pregnancy, postpartum, and neonatal conditions. The information is provided in a table with 3 fields without ontological annotations. The goal of the data harmonization pipeline was to map the textual descriptions from this set to the matching ontology terms. For brevity, in this paper we focus on mapping pregnancy outcome descriptions to the MONDO and/or HPO ontologies.

Fig. 4. LLM-based pipeline to align data with target vocabulary. (A) Prepare target mapping space; (B) Find best matching targets for input data; (C) Define acceptance criteria for the LLM and evaluate best matching pairs.

MONDO and HPO are complementary: MONDO standardizes disease definitions, while HPO defines phenotypic abnormalities in human diseases. They are linked through annotations: MONDO diseases are often associated with specific HPO terms to describe their characteristic clinical features. Therefore, using labels from both ontologies to map the EHR records helps to improve mapping recall. In an ontological annotation task, where concepts from an ontology are assigned to data (e.g., EHR text or images), precision is the proportion of predicted ontology terms that are correct, and recall is the proportion of relevant ontology terms that were successfully predicted [39]. Precision reflects how many of the assigned annotations are relevant, and recall measures how many of all relevant annotations were assigned.
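The precision/recall definitions above can be made concrete with set arithmetic over ontology term identifiers. This is a toy sketch: the MONDO/HPO identifiers below are illustrative, not taken from the dataset.

```python
def precision_recall(predicted, relevant):
    """Precision: fraction of predicted ontology terms that are relevant.
    Recall: fraction of relevant ontology terms that were predicted."""
    predicted, relevant = set(predicted), set(relevant)
    hits = len(predicted & relevant)
    precision = hits / len(predicted) if predicted else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# One hypothetical EHR record: two terms predicted, three actually relevant.
p, r = precision_recall({"MONDO:0005071", "HP:0001250"},
                        {"MONDO:0005071", "HP:0001250", "HP:0002090"})
```

Here both predicted terms are relevant (precision 1.0), but one relevant term was missed (recall 2/3).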
Figure 5 shows a variant of the generic matching pipeline to translate patient outcomes from this dataset to the corresponding MONDO and HPO ontology terms. In the first step, we extract labels and synonyms from the MONDO and HPO ontologies, retaining the corresponding ontology identifiers as metadata. We then create vector embeddings for these documents and store them in a Qdrant vector database cluster [4]. For each row in the dataset, we embed the observed outcomes and use them as a query to retrieve the (up to) 3 most relevant MONDO/HPO disease terms and/or hereditary conditions. In the second step, we request the LLM (ChatGPT-4o) to decide whether the retrieved matching pairs, 1401 in total, satisfy the acceptance criteria using the following prompt:

Given two short descriptions, decide whether they refer to the same disease or medical condition. If the second description is more narrow or specific, choose “No” as an answer. If the second description is broader or more generic, choose “Yes” as an answer. Start your answer from “Y” for “yes” or “N” for “no” and provide a concise justification, no more than 30 words, why you came to this conclusion.

To evaluate the precision of the LLM’s decisions on the equivalence of the conditions in the queries, we asked a human expert (MD) to evaluate whether the diseases in the matching pairs refer to equivalent or different conditions in the given context. We then compared the accepted and rejected pairs from the human expert and the LLM. The results are summarized in Figure 6(a). The decisions of the human expert and the LLM coincided in 1285 cases (92%). In 18 cases, the human expert’s decision was positive while the LLM rejected these pairs, and in 98 cases, the human expert rejected the mappings while the LLM approved them. Among these cases, 57 of the mappings wrongly approved by the LLM referred to related outcomes where the target description was more restrictive than the input; only 27 approved pairs referred to different diseases.
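The two-step procedure above, embedding-based retrieval followed by an LLM acceptance check, can be sketched as follows. This is a simplified stand-in: a brute-force cosine search replaces the Qdrant query, the labels and 2-d embeddings are made up, and `ask_llm` is a hypothetical wrapper around the chat API, injected so the logic runs without network calls.

```python
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

def retrieve(outcome_vec, term_index, k=3):
    """Rank ontology labels by similarity to the embedded outcome
    (stand-in for the Qdrant vector search)."""
    return sorted(term_index, key=lambda t: cosine(outcome_vec, t["vec"]),
                  reverse=True)[:k]

def validate(outcome_text, candidates, ask_llm):
    """Keep only candidate pairs the LLM accepts (answers starting 'Y')."""
    return [c["label"] for c in candidates
            if ask_llm(outcome_text, c["label"]).startswith("Y")]

# Tiny illustrative index with 2-d embeddings and made-up labels.
index = [{"label": "premature birth", "vec": [1.0, 0.0]},
         {"label": "preeclampsia", "vec": [0.9, 0.1]},
         {"label": "stillbirth", "vec": [0.0, 1.0]}]
top = retrieve([1.0, 0.0], index, k=2)
accepted = validate("born at 32 weeks", top,
                    lambda o, l: "Y" if l == "premature birth" else "N")
```

Injecting `ask_llm` as a callable keeps the acceptance criterion (the prompt) separate from the retrieval mechanics, mirroring the two steps of the pipeline.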
The human expert was asked to review the decisions for pairs in which his assessment disagreed with the decision by the LLM. We clarified the requirements for the assessment of related outcomes similarly to the LLM prompt, asking to accept a mapping only if the target mapping is the same or more generic. The human expert retracted 11 out of 18 initially accepted mappings that he considered acceptable for the study context but that formally did not match the aforementioned relation. The revised results are shown in Figure 6(b). Table II and Table III give examples of EHR observations and suggested MONDO/HPO labels misjudged by the human expert and the LLM, respectively. It is easy to see why this task was challenging: the records were very concise and involved numbers, signs, and abbreviations.

To summarize, the mapping of this dataset would be a much harder task without the generated suggestions based on vector embeddings. The medical researchers did not know how to reliably map these data using conventional ontology-based search methods. The patient records for mapping are not easy to interpret: some of them are very short, others include abbreviations or ambiguous syntax. As we showed via our evaluation, the suggested mappings based on vector space similarity alone are not good enough. Based on this study, we conclude that validation of the mappings with an LLM significantly improves the mapping precision. We observed that the LLM had certain difficulties in deciding whether to accept mappings for similar but not identical records. It is important to formulate acceptance criteria for the most basic ontological relations (such as the subtype or class-inclusion relation and the part-whole relation).

# B. Annotated dataset

The second dataset included information similar to that of the previous experiment, but the outcomes were already annotated with ICD-10 ontology codes.
Hence, to align this dataset with the target ontologies, it was sufficient to translate the 1162 unique ICD-10 codes featured in the dataset to the corresponding alternatives in MONDO and/or HPO. Although MONDO and HPO provide fields for cross-references to other ontologies, a quick search revealed that in the case of mapping from/to ICD-10, these references are available only for a small fraction of diseases, namely, 1840 for MONDO and 39 for HPO. Considering the size of these ontologies (HPO currently contains over 13,000 terms and over 156,000 annotations to hereditary diseases, MONDO defines approximately 25,600 disease terms, and the ICD-10 classification allows for more than 14,000 different codes), it is not an exaggeration to say that the identifier-based mapping between them is not directly available. To bridge the annotations, we used two methods:
• The RAG-based method relies on the embedded vector search, as in the previous example. Similarly, we searched for the 3 best matches. This generator produced 3129 candidate pairs.
• The SNOMED-based generator, outlined in Figure 7, connects the input ICD-10 codes to the target MONDO/HPO codes via the SNOMED CT database. This database provides references to ICD-10, while MONDO and HPO have cross-references with SNOMED. This method had no limit on how many mappings to produce for each ICD-10 code; we paired all ICD-10 codes with all MONDO/HPO codes related to the same SNOMED identifier. This generator produced 7787 candidate pairs.

Fig. 5. LLM-based pipeline to annotate patient outcomes with MONDO and/or HPO ontology terms. (A) Extract labels and synonyms from target databases, retaining ontology identifiers as metadata; (B) Find best matching labels for each outcome and form pairs; (C) Ask the LLM to accept pairs with the same or more general target outcomes (diseases).

Fig. 6. Number of data record mappings approved and rejected by a human expert vs. the LLM: (a) initial evaluation: yes-yes (352), no-no (933), no-yes (98), yes-no (18); (b) revised evaluation: yes-yes (352), no-no (944), no-yes (98), yes-no (7).

TABLE II. EXAMPLES OF MISMATCHED CONDITIONS BY LLM.

It is important to emphasize that mapping SNOMED CT to ICD-10 presents several challenges due to differences in structure, granularity, and intended use between the two systems. SNOMED CT is a comprehensive clinical terminology designed for detailed patient records, while ICD-10 is a classification system primarily used for statistical and billing purposes. The study by Wang and Bodenreider [63] concludes that a single SNOMED CT concept may require multiple ICD-10 codes to fully represent its meaning. Furthermore, the appropriate ICD-10 code can depend on patient-specific factors such as age and comorbidities, which require rule-based mapping approaches. Another study [43] emphasizes the practical difficulties in mapping and highlights the need for careful consideration of the clinical context. Due to these reasons, not all pairs of ICD-10 and MONDO/HPO codes formed by the cross-reference search on SNOMED CT refer to the same disease, and validation via the acceptance prompt is still necessary.

TABLE III. EXAMPLES OF MISMATCHED CONDITIONS BY THE HUMAN EXPERT (REVISED AFTER CRITERIA CLARIFICATION).

Fig. 7. ICD-10-to-MONDO/HPO conversion via SNOMED: candidate pair generation.

The mapping method outlined in Figure 7 produced 7787 matching pairs involving 800 original ICD-10 codes. For 362 ICD-10 codes no results were retrieved, either because (i) ICD-10 was not mentioned in SNOMED CT (192 cases) or (ii) the SNOMED CT code was not mentioned in MONDO and HPO references (170 cases). Figure 8 shows part of the distribution of the number of relevant matches per input code retrieved via the SNOMED CT database. This distribution is extremely right-skewed; the image omits the entries that map into 20 or more codes.
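The SNOMED-based candidate generation amounts to a two-hop join over cross-reference tables. The sketch below uses plain dictionaries; all identifiers are illustrative, whereas the real pipeline reads these cross-references from the SNOMED CT, MONDO, and HPO releases.

```python
def snomed_bridge(icd_codes, snomed_to_icd10, snomed_to_targets):
    """Pair each ICD-10 code with every MONDO/HPO code that shares a
    SNOMED CT concept with it (no limit on mappings per code)."""
    # Invert the SNOMED-to-ICD-10 reference table.
    icd_to_snomed = {}
    for sct, icd in snomed_to_icd10.items():
        icd_to_snomed.setdefault(icd, []).append(sct)
    # Two-hop join: ICD-10 -> SNOMED -> MONDO/HPO.
    return [(icd, target)
            for icd in icd_codes
            for sct in icd_to_snomed.get(icd, [])
            for target in snomed_to_targets.get(sct, [])]

# Illustrative identifiers only; the second code has no SNOMED entry,
# mirroring the 362 ICD-10 codes with no retrieved results.
pairs = snomed_bridge(
    ["O60.1", "Z99.9"],
    {"SCT:1001": "O60.1"},
    {"SCT:1001": ["MONDO:0001234", "HP:0005678"]},
)
```

Codes missing from either reference table simply yield no pairs, which is how the unmapped ICD-10 codes arise.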
Although most of the ICD-10 codes (98%) map to 10 MONDO and HPO terms or fewer, 2% of ICD-10 conditions translate into more than 10 relevant MONDO and HPO terms, with extreme cases reaching over a hundred relevant mappings (see Table IV).

Fig. 8. ICD-10-to-MONDO/HPO conversion via SNOMED: number of inputs vs. target mappings.

Fig. 9. Number of data record mappings approved and rejected by a human expert vs. the LLM: (a) RAG-based generator: yes-yes (155), no-no (413), no-yes (153), yes-no (6); (b) SNOMED-based generator: yes-yes (38), no-no (793), no-yes (82), yes-no (1).

TABLE IV. EXAMPLES OF ICD-10 CODES WITH A LARGE NUMBER OF RELEVANT MONDO AND HPO TERMS.

In both cases, the LLM made the final decision whether to accept or reject the mappings, accepting pairs with an equivalent or more generic output. In the case of embedding-based matching, 42.3% of the matching pairs were accepted. In the case of the SNOMED-based conversion, 14.7% of the matching pairs were accepted. Figure 9 shows the precision assessment of the human expert versus the LLM for subsets (728 and 915 random records, respectively) of the records in both versions of the ICD-10 to MONDO and HPO matching sets. In the RAG-generated pipeline, the MD and the LLM agree on 78% of decisions. With SNOMED-based matching pair generation, the MD and the LLM agree on 91% of the entries. The evaluation datasets and scripts to implement the presented mapping pipeline are available in [28]. For more relaxed acceptance criteria, i.e., whether both descriptions refer to the same disease or to a related but more general or more specific condition, the acceptance ratios were 71% and 80%, respectively. Both RAG- and SNOMED-based methods generated good candidate pairs, but the output condition was often more restrictive.
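The reported agreement percentages follow directly from the confusion counts shown in Figure 9; a quick check:

```python
def agreement(yes_yes, no_no, no_yes, yes_no):
    """Fraction of candidate pairs on which the human expert (first
    label) and the LLM (second label) made the same decision."""
    total = yes_yes + no_no + no_yes + yes_no
    return (yes_yes + no_no) / total

rag = agreement(155, 413, 153, 6)   # RAG-based generator counts
sct = agreement(38, 793, 82, 1)     # SNOMED-based generator counts
```

Rounding these fractions to whole percentages reproduces the 78% and 91% agreement figures quoted in the text.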
Interestingly, the target datasets produced by the two pair generation methods differed significantly: only for 475 ICD-10 codes (out of 1162) was a MONDO or HPO term generated by the RAG-based method also among the codes produced by the SNOMED-based method. This can be explained by the following observations:
• The recall of the RAG-based mapping pipeline may be compromised by our decision to retrieve only three relevant terms. As the alternative mapping method revealed, only 54% of the ICD-10 codes were mapped to 3 or fewer terms via SNOMED. Retrieving the 10 top matching pairs would ensure better recall by producing relevant options for 98% of entries. However, this would significantly increase the workload on the human expert evaluating the LLM’s precision and result in a lower acceptance rate, as in the case of the SNOMED-based generator.
• The SNOMED-based mapping did not produce any suggestions for 31% of the ICD-10 codes.
Hence, to improve mapping recall, it is useful to combine candidate pairs from both generators and rely on the LLM to filter out mappings not suitable for a particular research question.

# V. RELATED WORK

Data harmonization in FL is complex due to varying formats, terminologies, and standards across decentralized data sources. In sensitive domains like pediatric care, this is further complicated by privacy and consent requirements, necessitating standardized frameworks and interoperable validation to enable compliant, collaborative research. Schmidt et al. [56] survey published research papers to identify common definitions, goals, and workflows for data harmonization in healthcare. The review identified six common terms for data harmonization, including record linkage and health information exchange, and outlined nine key components such as integrating multiple databases, using unique patient identifiers, and involving data across various levels and institutions.
The report concludes that data completeness, quality, and coding were common barriers to effective use in clinical decision-making. Nan et al. [46] provide a comprehensive review of data harmonization techniques in digital healthcare, highlighting methodological trends, challenges, and a proposed checklist to guide future fusion-based applications. The review focuses on computational data harmonization approaches for multimodal data. Rolland et al. [54] propose a structured, six-step process to harmonize cancer epidemiology data, aiming to improve the reproducibility and rigor of pooled multi-study analyses. The CVD-COVID-UK consortium developed a four-layer harmonization method using large-scale EHRs to enable efficient analysis of COVID-19 and cardiovascular diseases across the UK [1]. The method successfully harmonized data for over 59 million individuals, offering a transparent, scalable approach for multi-nation research, which has supported various studies, particularly on COVID-19’s cardiovascular impact. Adhikari et al. [3] focus on cohort studies, offering practical guidance for managing and harmonizing data to enable multi-study integration and improve statistical power. Topaloglu and Palchuk present TriNetX [59], a clinical research collaboration platform that enables data-driven study design without requiring centralized data pooling. This application highlights the growing need for secure and privacy-preserving federated data analysis. The deployment in a secure, standards-compliant virtual cloud environment further underscores the critical role of secure infrastructure in supporting federated research networks. Swarm Learning [65] is a decentralized ML approach designed to enable the use of sensitive medical data across institutions without violating privacy laws. Unlike traditional FL, it uses edge computing and blockchain-based coordination without a central server.
Results showed that Swarm Learning models outperformed local models while preserving data confidentiality, offering a promising path for privacy-preserving precision medicine. Stonebraker and Ilyas [58] review the evolution of data integration systems, highlighting the limitations of traditional approaches such as ETL (extract, transform, and load) pipelines and federated databases. They emphasize the growing importance of addressing semantic heterogeneity and the need for more automated, scalable ML-driven methods. The authors propose a shift toward declarative, AI-assisted solutions to manage the increasing complexity and volume of heterogeneous data sources. Gibson et al. [19] explore the feasibility of employing ML techniques to develop claims-based algorithms for identifying health outcomes of interest (HOIs), specifically focusing on rhabdomyolysis (a condition in which damaged skeletal muscle breaks down rapidly). The study demonstrated that ML models, particularly the Super Learner ensemble, achieved higher positive predictive values compared to traditional expert-developed models, indicating the potential of these techniques to enhance the accuracy and efficiency of electronic phenotyping in healthcare research. Prisma [22] is a generic schema matching approach that leverages functional dependencies (FDs) to capture relationships between columns, even in cases where data is encrypted or column names are cryptic. Prisma uses a four-step process involving profiling databases, filtering FDs, creating graph-based representations, and comparing column embeddings to generate column correspondences. Several recent scientific studies have explored the application of LLMs in harmonizing healthcare data. Fernandez et al.
[17] predicted that LLMs would disrupt data management in two key ways: by enabling semantic understanding to advance long-standing challenges such as entity resolution and schema matching, and by blurring the line between traditional databases and information retrieval systems. Santos et al. [55] introduce a system that combines LLM-based reasoning with an interactive user interface and a library of data harmonization primitives. The system uses the top-k best matches between the source schema and the target schema. Matos et al. [40] present a framework that leverages LLMs to abstract medical concepts from EHRs. Evaluating five LLMs on tasks such as free-text extraction and binary classification, the research demonstrates that models like GPT-4o can achieve high accuracy in identifying generic route names and drug classifications, significantly enhancing efficiency in EHR data abstraction. A study by Li et al. [34] investigates the capability of LLMs to enhance healthcare data interoperability by converting clinical texts into Fast Healthcare Interoperability Resources (FHIR) standards. The presented experiments demonstrate that LLMs can streamline natural language processing and achieve an exceptional accuracy rate in exact matches compared to human annotations. Sett et al. [57] also explore the use of LLMs to standardize healthcare data by mapping clinical data schemata to established data standards like FHIR. The results indicate that the use of LLMs significantly reduces the need for manual data curation and improves the efficiency of the data standardization process, potentially accelerating the integration of AI in healthcare. Yoon et al. [68] evaluated the performance of LLMs in the transformation and transfer of healthcare data to support interoperability.
Using MIMIC-III [25] and UK Biobank datasets, the research demonstrates that LLMs can significantly improve data transformation and exchange, achieving high accuracy and efficiency without complex standardization processes. Recent studies have explored the integration of LLMs like ChatGPT into healthcare [2], [62]. Wang et al. [62] screened 820 articles and included 65 of them in the review. Although LLMs have demonstrated potential in improving access to general medical information, medical knowledge retrieval, summarization, and administrative tasks, they are not always able to provide reliable answers to complex health-related tasks, e.g., diagnosis. Moreover, concerns persist regarding their reliability, biases, and privacy risks. Bhanbhro et al. [7] investigate FL challenges, focusing on data heterogeneity, client weighting, and resource disparities. Through experiments on datasets like MNIST, CIFAR-10, and brain MRI scans, the study demonstrates how non-IID (Independent and Identically Distributed) data distributions and varying client capabilities can adversely affect global model performance and convergence. The authors explore mitigation strategies such as weighted aggregation and model personalization, highlighting the trade-offs between data diversity, model accuracy, and system efficiency in FL environments. Nasarian et al. [47] review methods and challenges in implementing interpretable ML and explainable AI within healthcare. They propose a three-level interpretability process (preprocessing, modeling, and post-processing) to enhance clinician-AI communication and trust, offering a step-by-step roadmap for integrating responsible AI into clinical decision support systems. Zhang et al. [69] address ethical concerns in healthcare AI by introducing a resource-adaptive FL framework that promotes fairness and privacy.
The proposed approach promises equitable participation across institutions with varying computational resources, improving model performance while protecting patient data. Currently, the best-performing LLMs are commonly accessed via API requests. This practice raises concerns about data privacy, and organizations with strict data protection policies, such as healthcare centers, are hesitant to adopt LLM-based pipelines. This motivates the need for better open-source models that are competitive with closed-source models. Another solution is to use private LLM instances and/or couple RAG and LLMs with differential privacy solutions [38]. While existing frameworks and studies have made significant progress in addressing specific challenges in data harmonization for FL, ranging from schema alignment and semantic interoperability to privacy-preserving infrastructure and the integration of LLMs, most have been tailored to particular domains, use cases, or workflows. In contrast, our work on the Brane/EPI frameworks proposes a more generalizable approach: a configurable programming environment designed to support a wide range of research workflows. By introducing a generic recipe for ontology-based data mapping and leveraging LLMs as semantic adjudicators, we aim to enable scalable, interpretable, and ontology-aligned federated research across heterogeneous datasets.
The rise of electronic health records (EHRs) has unlocked new opportunities for medical research, but privacy regulations and data heterogeneity remain key barriers to large-scale machine learning. Federated learning (FL) enables collaborative modeling without sharing raw data, yet faces challenges in harmonizing diverse clinical datasets. This paper presents a two-step data alignment strategy integrating ontologies and large language models (LLMs) to support secure, privacy-preserving FL in healthcare, demonstrating its effectiveness in a real-world project involving semantic mapping of EHR data.
[ "cs.LG", "cs.SE" ]
# 1 INTRODUCTION

With the massive volume of video data in real-world applications, analyzing video content has become increasingly crucial across various domains [2, 7–9, 11, 13, 14]. A common task in video analytics involves identifying a specific short video segment within a longer video. For instance, consider a surveillance video capturing road traffic: retrieving a moment where a motorcycle is positioned to the right of a bus and then moves to the left of the bus within a few seconds can be instrumental in detecting anomalous behavior in road transportation. We refer to this type of query that retrieves multi-frame events as a video moment retrieval query (VMRQ). One category of existing systems for VMRQ processing relies on manual-intensive configurations and interactions. In such systems, users must define task-specific machine learning models for video content processing, which requires prior domain knowledge. These systems typically employ either a SQL-like query language [7, 8] or a query-by-example interface [2, 11, 13]. The former struggles to express complex video moments due to the need for multiple join expressions or recursive joins to define video moments, while the latter demands extensive user interaction with the video content to perform the search manually (labeling intermediate query results). These designs result in low human efficiency. Alternatively, vision large multimodal models (VLMs) can be used for video query processing [1]. VLMs offer an intuitive, end-to-end video processing interface: users can simply load a video and input natural language queries into the context window, providing high human efficiency. However, VLMs fall short when processing video queries at scale due to several limitations.
First, VLMs’ inference is highly time-consuming due to the underlying self-attention mechanism, which requires $O(n^2)$ time complexity, where $n$ is the total number of tokens in the context window—this overhead is primarily dominated by video length. Second, the autoregressive nature of decoder-only VLMs enforces a serial computation mechanism, significantly limiting opportunities for parallel processing. Finally, executing ad-hoc exploratory queries or video updates, such as adding new videos, necessitates repeated processing of the entire context window, further exacerbating inefficiencies. Therefore, using VLMs out of the box for video query processing leads to low system efficiency. To overcome the above issues, we propose LazyVLM, a neurosymbolic approach designed for scalable video analysis. LazyVLM introduces a semi-structured text interface for defining video events, allowing users to describe visual content in frames using subject-predicate-object (SPO) triples, where each element (S, P, or O) is specified in text (see Example 2.1). Video can be loaded into LazyVLM without defining task-specific models, i.e., by simply dropping in the video with no additional effort required. By offering an interface similar to that of VLMs, LazyVLM improves human efficiency, compared to traditional manual-intensive systems. LazyVLM significantly reduces the computation cost, compared to out-of-the-box VLMs, thereby improving system efficiency for video analytics. This is achieved by generating structured views of video data—specifically, scene graphs for frames—and embedding nodes (representing entities) within these graphs. LazyVLM then decomposes video queries into semantic search and symbolic search. Semantic search is based on embedding vectors and is dedicated to searching an entity (either a subject or an object in the SPO triples).
Symbolic search is based on relational queries and focuses on verifying the existence of relationships (predicates between subjects and objects in SPO triples). Additionally, LazyVLM leverages VLMs to refine the results of relational queries. However, since most computation is offloaded to symbolic search and semantic search, VLMs are used in a lightweight manner. Instead of processing an entire video, LazyVLM selectively applies VLMs to individual frames that have been verified through relational queries—dramatically reducing computational overhead. Thanks to its fine-grained query decomposition, LazyVLM enables parallel computing, allowing multiple relational queries, vector searches, and VLM refinements to be executed simultaneously. Moreover, structured views and embedding vectors are precomputed once and stored. This makes LazyVLM update-friendly, supporting incremental updates by inserting new structured views and vectors, eliminating the need to reprocess the entire video from scratch. LazyVLM enhances both human efficiency and system efficiency by combining the strengths of symbolic search, semantic search, and VLMs. This integration enables a powerful and efficient framework for querying open-domain video data. The remainder of this paper provides a detailed overview of LazyVLM’s architecture and interaction mechanisms.

# 2 SYSTEM OVERVIEW

Figure 1 illustrates the processing pipeline of LazyVLM.
Video data is first preprocessed to generate structured views, which are stored in the Entity Store and Relationship Store (see Section 2.2 for details). LazyVLM provides a semi-structured text interface that allows users to query the loaded video data. At query time, LazyVLM employs a series of components to retrieve relevant information from the two stores, followed by the use of a VLM to refine the pruned set of candidate results. The remainder of this section describes the key components of LazyVLM in detail.

# 2.1 Query Interface and Functionality

The query interface of LazyVLM is based on SPO triples that describe the visual content in video frames. We use the following example to illustrate how users can describe multi-frame events in a natural and intuitive manner. Example 2.1. Consider the following query that defines a complex event: a man with a backpack is near a bicycle, and another man in red clothes moves from the left of the bicycle to the right of the bicycle after more than 2 seconds. This query is formally specified in the following steps in LazyVLM. (1) Entity Description: $E = \{e_1, e_2, e_3\}$, with $e_1.\mathrm{text} =$ "man with backpack", $e_2.\mathrm{text} =$ "bicycle", and $e_3.\mathrm{text} =$ "man in red". These defined entities can serve as either a subject or an object in SPO triples. (2) Relationship Description: $R = \{r_1, r_2, r_3\}$, where $r_1.\mathrm{text} =$ "is near", $r_2.\mathrm{text} =$ "leftOf", and $r_3.\mathrm{text} =$ "rightOf". These relationships function as predicates in SPO triples.
(3) Frame Description: $F = (f_0, f_1)$, where $f_0 = \{(e_1, r_1, e_2), (e_3, r_2, e_2)\}$ represents "man with backpack is near bicycle; man in red is on the left of bicycle", and $f_1 = \{(e_1, r_1, e_2), (e_3, r_3, e_2)\}$ represents "man with backpack is near bicycle; man in red is on the right of bicycle". (4) Temporal Constraint: $f_1 - f_0 > 4$, ensuring that the second frame occurs more than 2 seconds after the first, assuming a frame rate of 2 frames per second. In LazyVLM, input video is automatically divided into non-overlapping video clips, i.e., short segments lasting a few seconds or minutes, with the length defined by the user. LazyVLM then processes an input query, e.g., the one in Example 2.1, by searching for events specified in the query, retrieving, and returning the clips that contain the detected event. Functionality Supported. LazyVLM supports a comprehensive range of core video analytics functionalities. At the object level, LazyVLM supports (1) object detection, identifying and localizing entities such as cars, people, and bicycles within video frames, and (2) object tracking, which follows these objects across multiple frames, facilitating trajectory-based analysis. In addition, LazyVLM includes (3) attribute recognition, allowing queries based on object properties like color or size. At the relationship level, LazyVLM includes (4) relationship detection, which captures spatial and interaction-based relationships (e.g., car near pedestrian or person holding an object).
Beyond basic object and relationship analytics, LazyVLM supports advanced query operations: (5) conjunction queries allow users to specify multiple conditions (e.g., detecting a person and a vehicle in the same frame), while (6) sequencing queries enforce temporal order between events (e.g., detecting a person walking before entering a car). Additionally, (7) window queries constrain events within a defined time duration (e.g., detecting a car stopping within 10 seconds after a pedestrian appears). User Interaction Flow. User interaction is straightforward with LazyVLM. (1) Upload Video Dataset: Users begin by uploading their videos. LazyVLM automatically segments the videos, extracts visual features, and builds the corresponding structured views. (2) Compose a Query: Through the intuitive text interface, users describe the event of interest. For example, to retrieve clips where “a man with a backpack is near a bicycle, and a man in red moves from being on the left of the bicycle to the right of the bicycle after more than 2 seconds”, users enter the corresponding entity descriptions, relationship descriptions, frame information, and temporal constraints, as shown in Example 2.1. The query is then submitted to the system, and users can view the results after the query execution in the interface. Detailed steps are presented in Section 3.

# 2.2 Video Preprocessing

Video preprocessing converts raw video content into a structured, searchable format through the following integrated stages. Video Segmentation. When a video dataset is uploaded, each video $V$ is automatically partitioned into a sequence of non-overlapping segments, $V = (v_1, v_2, \ldots, v_n)$. Each segment $v_i$ comprises a fixed number of frames, $v_i = (f_1, f_2, \ldots, f_m)$, facilitating parallel processing and management of long videos. Content Extraction.
Each video segment is processed to extract detailed visual information. To represent the visual content of each frame, we construct a scene graph that encodes the frame as a set of subject–predicate–object (SPO) triples in the form (subject, predicate, object). For example, a frame may include triples such as ("man with backpack", "near", "bicycle") and ("bicycle", "on", "sidewalk"). We employ the IETrans [12] model to generate these scene graphs. For extracted triples, each subject or object—internally referred to as an entity—is represented as a tuple $(vid, eid, ete, eie)$, and all entities are stored in the Entity Store. $vid$ denotes the identifier of the video segment containing the entity; $eid$ is a unique entity identifier within the segment, obtained through entity tracking using YOLOv8 [6]; $ete$ is a text embedding derived from the entity’s description using the e5-mistral-7b [10] model; and $eie$ is an image embedding that captures the entity’s visual appearance, generated by the VLM2Vec [5] model. In addition to entity representations, inter-object relationships are captured as tuples $(vid, fid, sid, rl, oid)$, which are stored in the Relationship Store. $vid$ and $fid$ refer to the video segment and frame identifiers, respectively; $sid$ and $oid$ are the unique entity identifiers of the subject and object involved in the relationship; and $rl$ denotes the relationship label, e.g., "near".

# 2.3 Query Processing

Users express an event query as a four-part specification, including entity descriptions, relationship descriptions, frame-level specifications, and temporal constraints, as illustrated in Example 2.1. The query engine of LazyVLM processes the query through a series of stages: Entity Matching, SQL Query Generation, Relationship Matching and Refinement, and Temporal Matching. Entity Matching.
For each entity defined in the query, a vector similarity search is performed to match the textual description of the entity against the embeddings stored in the Entity Store. The result is a set of candidate entities for each query entity, represented as $(vid, eid)$ pairs, i.e., matched entities in specific video segments. SQL Query Generation. Based on the entity matching results, the query engine automatically generates SQL queries to retrieve candidate frames from the Relationship Store. These queries filter rows by matching entity identifiers, i.e., $(vid, eid)$ pairs, and they return frames that potentially include the specified entities. The output is a set of rows from the Relationship Store corresponding to each query entity, denoted as candidate frames. Relationship Matching and Refinement. For each SPO triple in the query, the query engine performs a join between the candidate frames for the subject and object, based on shared $vid$ and $fid$ values. This identifies potential relationships between subject and object candidates. Following this coarse-grained match, a refinement step is applied to further verify relationships. Specifically, a lightweight local VLM (e.g., Qwen-2.5-VL 7B [1]) is used for the verification. Optionally, users may apply cost-efficient closed-source VLMs (e.g., GPT-4o-mini) for deployment-friendly setups. This refinement ensures that the relationships specified in the query are visually and semantically grounded in the candidate frames. Finally, the engine joins the refined relationship results to identify frames where all specified SPO triples in a query frame co-occur, producing a set of candidate video frames for each query frame. Temporal Matching. In the final stage, the engine checks whether the candidate video frames satisfy the temporal constraints defined in the query. This involves a join over frame identifiers to ensure compliance with temporal logic, e.g., $f_1 - f_0 > 4$.
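For illustration, the relational side of this pipeline can be sketched with simplified store schemas and toy data. This is a sketch under assumptions: the paper does not specify the storage layout at this level of detail, the embedding columns ($ete$, $eie$) are elided, and several stages are collapsed into one query.

```python
import sqlite3

# Simplified sketch of the Entity Store and Relationship Store.
# Column names (vid, eid, fid, sid, rl, oid) follow the tuples
# described in Section 2.2; embedding columns are omitted.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE entity (vid INTEGER, eid INTEGER);
CREATE TABLE relationship (vid INTEGER, fid INTEGER,
                           sid INTEGER, rl TEXT, oid INTEGER);
""")

# Toy data: in segment 1, entity 3 ("man in red") is left of
# entity 2 ("bicycle") in frame 0 and right of it in frame 5.
con.executemany("INSERT INTO entity VALUES (?, ?)", [(1, 2), (1, 3)])
con.executemany("INSERT INTO relationship VALUES (?, ?, ?, ?, ?)",
                [(1, 0, 3, "leftOf", 2), (1, 5, 3, "rightOf", 2)])

# Relationship matching joins candidate rows on a shared segment,
# and the temporal check enforces f1 - f0 > 4 over frame ids.
rows = con.execute("""
SELECT a.vid, a.fid, b.fid
FROM relationship a JOIN relationship b
  ON a.vid = b.vid
WHERE a.rl = 'leftOf' AND b.rl = 'rightOf'
  AND a.sid = b.sid AND a.oid = b.oid
  AND b.fid - a.fid > 4   -- temporal constraint
""").fetchall()
print(rows)  # → [(1, 0, 5)]
```

In the full system, the candidate entity identifiers would come from the preceding vector similarity search rather than being hard-coded, and the surviving frames would still be passed to a VLM for refinement.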
Ultimately, video identifiers including candidate video frames that satisfy the temporal conditions are returned as the final query results. A major advantage of the proposed query processing pipeline is that each step is inherently parallelizable. For instance, entity matching tasks can be executed in parallel, as they are independent of one another. In addition, the query engine applies VLMs only for fine-grained relationship verification on a pruned set of candidate frames, rather than processing all video frames, making it significantly more lightweight compared to end-to-end VLM-based approaches for video query processing.

# 3 DEMONSTRATION

We demonstrate LazyVLM using the MOT20 [4] and TAO [3] datasets. Specifically, we use Example 2.1 to perform a query on the MOT20-02 dataset. We guide the users through the following detailed steps, illustrating the interactive and intuitive interface of LazyVLM. Step ❶: Load Dataset and Enter Hyperparameters. Users select the MOT20-02 dataset path from a dropdown menu. The interface shows dataset metadata, including the total number of video segments (e.g., 11 segments for MOT20-02), as well as paths for preprocessed and raw data. Users then configure several hyperparameters for query execution: the top-$k$ parameter (e.g., top 3 results), the temperature for controlling search strictness, and the thresholds for vision embedding and textual embedding similarity searches, which affect entity and relationship matching accuracy. Step ❷: Enter Entities. Users input descriptive text labels for the entities involved in their query via a dedicated input field. In the provided example query, users define entities like "man with backpack," "bicycle," and "man in red". These entities are then listed on the interface and can be reviewed or removed if necessary. Step ❸: Enter Relationships.
Users specify relationships that describe interactions or spatial positions between entities, such as "is near," "is on the left of," or "is talking to". Each entered relationship appears on the interface for adjustment before proceeding. Step ❹: Enter Triples. Users construct SPO triples to precisely define the interactions between entities. The interface supports
triple formation by allowing users to select from previously entered entities and relationships. For instance, users define triples like "man with backpack is near bicycle," "man in red is on the left of bicycle," and "man in red is on the right of bicycle". Each formed triple is displayed for user verification. Step ❺: Enter Frames and Temporal Constraints. Users organize the defined triples into specific frames according to their query’s temporal constraint. For each frame, users explicitly select which triples it contains from a dropdown list. For the example provided, Frame 1 is set to contain triples "man with backpack is near bicycle" and "man in red is on the left of bicycle," while Frame 2 includes "man with backpack is near bicycle" and "man in red is on the right of bicycle." Users also define temporal constraints, such as specifying Frame 1 to occur at least 4 frames before Frame 2, since the video has 2 frames per second. Step ❻: Query Execution and Presentation of Results. Upon completing the query setup, users initiate query execution by clicking the "Query" button. LazyVLM processes the query and displays matching results. The results detail the precise video segments and exact frame identifiers corresponding to each user-defined frame.

Figure 2: Pipeline of user interactions in LazyVLM for specifying and executing a video query: Step ❶: Load Dataset and Enter Hyperparameters; Step ❷: Enter Entities; Step ❸: Enter Relationships; Step ❹: Enter Triples; Step ❺: Enter Frames and Temporal Constraints; and Step ❻: Query Execution and Presentation of Results.
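The four-part specification entered in the steps above mirrors Example 2.1 and can be written down as a plain data structure. The sketch below uses hypothetical field names for illustration; it is not LazyVLM's internal representation.

```python
# Illustrative encoding of the four-part query from Example 2.1;
# the field names here are assumptions, not LazyVLM's actual format.
entities = {1: "man with backpack", 2: "bicycle", 3: "man in red"}
relations = {1: "is near", 2: "leftOf", 3: "rightOf"}

# Each query frame is a set of SPO triples over the ids above.
frames = [
    {(1, 1, 2), (3, 2, 2)},  # f0: e1 near e2; e3 left of e2
    {(1, 1, 2), (3, 3, 2)},  # f1: e1 near e2; e3 right of e2
]

# Temporal constraint: f1 occurs more than 2 seconds after f0,
# i.e., a gap of more than 4 frames at 2 frames per second.
def satisfies(frame_ids):
    f0, f1 = frame_ids
    return f1 - f0 > 4

print(satisfies((0, 5)))  # → True
```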
Current video analytics approaches face a fundamental trade-off between flexibility and efficiency. End-to-end Vision Language Models (VLMs) often struggle with long-context processing and incur high computational costs, while neural-symbolic methods depend heavily on manual labeling and rigid rule design. In this paper, we introduce LazyVLM, a neuro-symbolic video analytics system that provides a user-friendly query interface similar to VLMs, while addressing their scalability limitations. LazyVLM enables users to effortlessly drop in video data and specify complex multi-frame video queries using a semi-structured text interface for video analytics. To address the scalability limitations of VLMs, LazyVLM decomposes multi-frame video queries into fine-grained operations and offloads the bulk of the processing to efficient relational query execution and vector similarity search. We demonstrate that LazyVLM provides a robust, efficient, and user-friendly solution for querying open-domain video data at scale.
[ "cs.DB", "cs.AI", "cs.CV", "cs.IR", "cs.MM" ]
# 1 Introduction
1.1 Inference-Time Policy Steering
1.2 Task and Motion Imitation
1.3 Outline

# 2 Inference-Time Policy Steering with Diffusion
2.1 Introduction
2.2 Method
2.2.1 Specification of User Intent
2.2.2 Policy Steering
2.3 Experiments
2.3.1 Maze2D - Continuous Motion Alignment (MA)
2.3.2 Block Stacking - Discrete Task Alignment (TA)
2.3.3 Real World Kitchen - Discrete Task Alignment (TA)
2.4 Related Work
2.5 Conclusion

# 3 Task and Motion Imitation with LTL Specification
3.1 Introduction
3.2 Temporal Logic Imitation: Problem Formulation
3.3 Preliminaries
3.3.1 LTL Task Specification
3.3.2 Task-Level Reactivity in LTL
3.3.3 Motion-Level Reactivity in DS
3.3.4 Bisimulation between Discrete Plan and Continuous Policy
3.4 Method
3.4.1 Offline Learning Phase
3.4.2 Online Learning Phase
3.5 Proof
3.6 Experiments
3.6.1 Single-Mode Invariance and Reachability
3.6.2 Multi-Modal Reactivity and Generalization to New Tasks
3.7 Related Work
3.8 Conclusion

# 4 Learning Grounding Classifiers for Task and Motion Imitation
4.1 Introduction
4.2 Method
4.2.1 Demonstration Data Augmentation with Counterfactual Perturbations
4.2.2 Semantic description of demonstrations and task structure from LLMs
4.2.3 End-to-end explanation-based learning for mode classification
4.2.4 Mode-Based Motion Generation
4.3 Experiments
4.3.1 2D Navigation
4.3.2 Robosuite
4.3.3 Real Robot Experiments: 2D Navigation and Scooping Tasks
4.4 Related Work
4.5 Conclusion
Imitation learning has driven the development of generalist policies capable of autonomously solving multiple tasks. However, when a pretrained policy makes errors during deployment, there are limited mechanisms for users to correct its behavior. While collecting additional data for finetuning can address such issues, doing so for each downstream use case is inefficient at deployment time. My research proposes an alternative: keeping pretrained policies frozen as a fixed skill repertoire while allowing user interactions to guide behavior generation toward user preferences at inference time. By making pretrained policies steerable, users can help correct policy errors when the model struggles to generalize, without needing to finetune the policy. Specifically, I propose (1) inference-time steering, which leverages user interactions to switch between discrete skills, and (2) task and motion imitation, which enables user interactions to edit continuous motions while satisfying task constraints defined by discrete symbolic plans. These frameworks correct misaligned policy predictions without requiring additional training, maximizing the utility of pretrained models while achieving inference-time user objectives.
[ "cs.RO", "cs.AI", "cs.HC", "cs.LG" ]
# 1 Introduction

Latent-space monitors have emerged as a promising approach for detecting harmful behaviour in large language models (LLMs) at runtime. Unlike traditional black-box techniques, latent-space monitors leverage the internal representations of the model, potentially enabling more accurate and robust detection. Recent work has demonstrated their effectiveness in various tasks, including detecting deception, unsafe completions, and other forms of potentially harmful outputs [8, 13, 21, 22]. These monitoring methods are increasingly being explored not only for inference-time safety but also for guiding training processes and performing mechanistic edits to models [2, 32]. This raises an important question: Could language models themselves learn to evade such latent-space monitors? Or, are the monitors robust to such optimisation pressure? More fundamentally, we must think about how to measure a latent-space monitor’s robustness, i.e., with what types of attacks should we red-team latent-space monitors? One previously explored approach is to search for inputs that bypass the latent-space monitor [20, 31]. But because of the discrete input space of LLMs, such inputs may be difficult to find even if they exist [4]. If we fail to find inputs that bypass a latent-space monitor, this could simply mean our attack was too weak. Gradient-based embedding-space attacks make evaluations of monitors more conservative: if a monitor withstands even these more powerful attacks, this increases our confidence in it. However, Bailey et al. [4] show that monitors don’t withstand embedding attacks—and arguably, such attacks are too conservative. Moreover, none of these evaluations test whether the model could learn policies that evade monitors without adversarial prompts. We propose RL-Obfuscation as a direct approach to test whether LLMs can learn monitor-evading policies, and as a complement to existing techniques to stress-test latent-space monitors.
Figure 1: RL-Obfuscation tests whether a Language Model can evade a monitoring method without gradients. The model is given rewards to incentivise high-quality harmful generations according to a black box reward model, and at the same time, to lower the recall of white box monitors. The completions to the left show various policies learned by the model. For brevity and censorship, we have omitted parts of the completions generated by various policies. Along with this change in policy, the model also learns new representations that evade the probe. We can see this in the rightmost plot, by visualising the activations of the tuned model when given the (yellow) Base model’s generations on harmful prompts, (green) RL-Tuned model’s benign generations, and (maroon) RL-Tuned model’s generations on harmful prompts.

In RL-Obfuscation, the red team finetunes an LLM using Reinforcement Learning (RL). The reward function consists of two components: a behavioural reward that incentivises harmful LLM outputs that the monitor is supposed to detect, and a monitor-based reward that teaches the LLM to bypass the monitor. Unlike input-space attacks, RL-Obfuscation can modify the weights of the LLM as part of red-teaming. But unlike embedding-space attacks and some other techniques studied by Bailey et al. [4], RL-Obfuscation does not use any gradients through the monitor. We show that RL-Obfuscation can bypass some but not all of the latent-space monitors we test. Specifically, we first train probes to detect harmful generations on fine-tuned versions of Llama 2 7B [29], Llama 3 8B [14] and Qwen 2.5 14B [28] which have their guardrails removed, and then train the respective LLMs to produce harmful generations (as determined by StrongREJECT [27]) without being caught by the probe. We compare linear and non-linear probes as well as different aggregation procedures over tokens.
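As a toy illustration of what such monitors compute, the sketch below applies a linear probe tokenwise to residual-stream activations and aggregates the per-token scores with mean- and max-pooling. The activations and probe weights are random placeholders, not a trained monitor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-token activations for one completion: t generated tokens,
# d-dimensional hidden states (toy sizes, random placeholder data).
t, d = 6, 16
acts = rng.normal(size=(t, d))

# A linear probe scores each generated token independently;
# these weights stand in for a trained harmfulness probe.
w, b = rng.normal(size=d), 0.0
token_scores = 1 / (1 + np.exp(-(acts @ w + b)))  # sigmoid per token

# An aggregation over tokenwise scores yields one score per completion.
mean_score = token_scores.mean()  # mean-pooling
max_score = token_scores.max()    # max-pooling flags a single bad token

# The completion is flagged if the aggregated score exceeds a threshold.
tau = 0.5
flagged = max_score > tau
```

Max-pooling is the stricter choice here, since a single high-scoring token is enough to flag the completion, which is one intuition for why it is harder to evade.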
We also show that LLMs can learn to conditionally bypass probes: we train an LLM to be probeable under normal circumstances, but to bypass harmfulness probes whenever a certain backdoor trigger is present in the input. Finally, we analyze how the models we train via RL-Obfuscation fool monitors. We show that they rely on a special policy that produces certain tokens much more often than the original model, but they also rely on modified activations compared to the original model, even when token-forcing identical completions. In summary, our contributions are:
1. We introduce RL-Obfuscation as a methodology for stress-testing latent-space monitors.
2. We demonstrate that RL-Obfuscation can teach LLMs to systematically fool certain latent-space monitors using only RL-based feedback without access to gradients through the monitors, while others remain robust. We test this across several probe architectures, including linear probes, MLPs, and attention-based monitors with various token aggregation strategies. Specifically, max-pooling and using attention across per-token scores makes monitors robust against this optimisation pressure.
3. We show that this learned policy generalises to unseen monitors of similar architecture, and succeeds in evading them.
4. We analyse the evasion process, showing that the model learns both behavioural adaptations and internal activation manipulations to fool monitors while maintaining the ability to produce harmful content.

# 2 Related Work

Latent-Space Monitoring. Latent-space monitoring methods attempt to flag undesired behaviors based on a network’s latent states and have become a popular complement to input- or output-based monitors. We test the robustness of simple methods such as linear, MLP, and attention probes [1, 2, 4, 6, 7, 21, 22, 30] against RL.
Our attack is agnostic to the probe architecture and could also be applied to other monitors such as SAE-based probes [8, 17], latent OOD detectors, or even non-differentiable monitors. Adversarial Attacks on Latent-Space Monitors Adversarial attacks against latent-space defenses are well-known in the image domain [3, 9, 15, 18]. For LLMs, Schwinn and Geisler [24] and Li et al. [19] look for adversarial examples against latent-space defenses. Bailey et al. [4] study different attacker affordances (including input-space attacks, embedding attacks, and backdoor attacks) against a range of different latent-space monitors under the term Obfuscated Activations. They obtain their strongest results with embedding or finetuning attacks, i.e., very strong attacker affordances. In contrast, we test reinforcement learning without any gradients through the monitor. Reinforcement Learning Attacks and Reward Hacking It is well-known that optimizing against an imperfect reward function can lead to reward hacking, where reward goes up but the model’s behavior violates the intentions of the developer [11, 23, 26]. Monitors against harmful behavior are of course imperfect, and so training against monitors might lead to monitor evasion rather than eliminating the harmful behavior [5]. Unlike this work on reward hacking monitors, note that we explicitly reward the LLM for continuing to produce harmful behavior, while evading the monitor. Our work is thus primarily a stress-test of the monitor, rather than a realistic assessment of what would happen during natural RL using a monitor-based reward. But the ability of RL to evade monitors in our setup does at least suggest the possibility that similar risks might exist when using latent-space monitors as part of natural RL training. Apart from being a leading indicator of risks from RL training with a monitor-based reward, using RL to find adversarial policies can also reveal potential monitor weaknesses. 
This is reminiscent of using RL to find exploits of policies [12], but applied to latent-space monitors instead. # 3 Problem Setup and Methodology We want to evaluate whether a language model can learn to fool a class of whitebox monitors. For that, we use two setups: • Setup 1: Learning against a static probe. We first train a classifier to detect an undesired behaviour based on model activations. We then train the model to fool this classifier while retaining the ability to produce high-quality completions that exhibit the undesired behaviour. The dataset for this setup consists only of the undesired behaviour (i.e., no reward is given for retaining performance outside this behaviour). • Setup 2: Learning to conditionally bypass a static monitor. We train the model to fool the probe only when provided with a specific trigger. On samples without the trigger, the model is rewarded for maximising the performance of the probe. This setup's training data also contains only the undesired behaviour; the only difference is the presence of a trigger in a subset of the prompts. We first remove the guardrails of a model by fine-tuning it on the BadLlama dataset [10] using LoRA [16]. This acts as our base model. We then apply RL-Obfuscation to evade probes while still answering harmfully. We use the task of harmfulness detection as our undesired behaviour. Unless stated otherwise, we use Llama 2 7B [29] as our base model and train it to evade monitors trained at its layer 10 residual stream.
Algorithm 1: RL-Obfuscation training loop
Require: Dataset $\mathcal{D}$ of harmful examples, base model $\pi_{\mathrm{ref}}$, whitebox monitor WBM, blackbox model BBM, threshold $\tau$, aggregation method $\mathbb{H}$
Ensure: Trained model $\pi_{\theta}$ that can fool the whitebox monitor
1: Initialize $\pi_{\theta} \leftarrow \pi_{\mathrm{ref}}$
2: for each training epoch do
3: Update threshold $\tau$ if using a curriculum
4: for each prompt $p_i \in \mathcal{D}$ do
5: // Generate 8 different completions for each prompt
6: $\{ y_{i,j} \}_{j=1}^{8} \sim \pi_{\theta}(\cdot \mid p_i)$
7: // Extract tokenwise activations at layer $L$, only for the generated tokens
8: $acts_{i,j} \leftarrow \mathrm{Activations}(\pi_{\theta}(p_i \oplus y_{i,j}))[L, \mathrm{len}(p_i){:}]$
9: // Compute reward based on the setup
10: $R_{i,j} \leftarrow R(p_i, y_{i,j}, acts_{i,j}, \tau, \mathbb{H} \mid \text{`trigger'} \in p_i)$
11: // Update policy using GRPO with all rewards
12: $\pi_{\theta} \leftarrow \mathrm{GRPO\_update}(\{ R_{i,j} \}_{i,j}, \pi_{\mathrm{ref}}, \pi_{\theta}, \beta_{KL} = 0.05)$
13: end for
14: end for
Reward Function We use a combination of three scoring functions to reward the model. For a prompt $p$, model $\pi_{\theta}$, and a sampled completion $y$, we define them as: • Probe Score, $$ S_{WB}(A_L, \tau, \mathbb{H}) = \mathbb{H}(\mathbb{I}(\mathrm{WBM}(A_L) > \tau)) $$ where WBM is the whitebox activation-based classifier, $A_L = acts(\pi_{\theta}, p, y, L)$ is the $L^{th}$ layer's activations for the model completion $y$, $\mathbb{I}$ is the indicator function, $\tau$ is the probe classification threshold hyperparameter, and $\mathbb{H} : \mathbb{R}^t \to \mathbb{R}$ is an aggregation function over tokenwise scores of length $t$ that produces a single score for the entire completion (e.g., the mean over scores). This is the classification score of a latent-space monitor at some layer, computed only over model completions. Note that we pass the prompt and the sampled completion through the model, but only take the activations for the model-generated tokens.
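As a concrete sketch of the probe score $S_{WB}$, the per-token indicator is aggregated over the generated tokens; the variable names here are ours, not the paper's code.

```python
import numpy as np

def probe_score(token_scores, tau, aggregation="mean"):
    """S_WB(A_L, tau, H): aggregate the per-token indicator I(WBM(a_t) > tau)
    over the generated tokens with H in {mean, median, max}."""
    flags = (np.asarray(token_scores, dtype=float) > tau).astype(float)
    agg = {"mean": np.mean, "median": np.median, "max": np.max}[aggregation]
    return float(agg(flags))

# A completion whose monitor flags 3 of 5 generated tokens:
scores = [0.9, 0.1, 0.8, 0.7, 0.2]
mean_score = probe_score(scores, tau=0.5, aggregation="mean")  # fraction of flagged tokens
max_score = probe_score(scores, tau=0.5, aggregation="max")    # 1 if any token is flagged
```

With mean aggregation the model can dilute a few flagged tokens with unflagged ones, whereas max aggregation fires if any single token exceeds the threshold; this difference matters for the robustness results later.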
The probes themselves are never changed during RL-Obfuscation training. From here on, we refer to probes that are trained only on the base model as ‘static probes’. For evading probes at high FPRs, we use a curriculum reward with $\tau$ varying every few epochs. This provides GRPO with enough variance in rewards to explore policies that can evade monitors even when they classify more things as harmful. We can also do this with continuous rewards or step functions, but found them to be much more sensitive than a curriculum. • Blackbox Reward, $$ S_{BB}(p, y) = \mathbf{BBM}(p, y) $$ where BBM is a language model that scores the quality of the completions. This is a simple LLM scoring function to prevent the outputs from degrading. We use the harmfulness dataset from Bailey et al. [4] and the StrongReject grader from Souly et al. [27] as BBM for scoring the quality of the completions. The StrongReject grader gives integer scores between 1 and 5 for each completion, based on how relevant the response is to a harmful query. We center it at 3 and normalize it to lie between $-0.5$ and $0.5$. • Length Penalty, $$ S_{LP}(y) = e^{-\left(\frac{len(y) - 200}{10}\right)^2} $$ to softly restrict the completions to 200 tokens. Figure 2: Violin plot of the score distributions of the original probe used for RL-Obfuscation (blue violins) and a new probe trained on the backdoored model (yellow violins). The right-most violins show the probe scores on the base model. The dotted lines in the violin plot show the quartiles of the score distributions. A $3.5\%$ FPR threshold was used for RL-Obfuscation training. This fools a newly trained probe at $1\%$ FPR. Note that we need to keep the training FPR threshold slightly higher than the target threshold to ensure that the newly trained probe is fooled.
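The two auxiliary scoring terms follow directly from their definitions; a minimal sketch (function names are ours):

```python
import math

def blackbox_reward(strongreject_score):
    """S_BB via the StrongReject grader: centre the 1-5 grade at 3 and
    rescale so the reward lies in [-0.5, 0.5]."""
    return (strongreject_score - 3) / 4

def length_penalty(n_tokens):
    """S_LP(y) = exp(-((len(y) - 200) / 10)^2): near 1 around 200 tokens,
    decaying rapidly away from that length."""
    return math.exp(-(((n_tokens - 200) / 10) ** 2))
```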
For Setup 1, the reward function is $$ R(p, y, A_L, \tau, \mathbb{H}) = -\lambda_{wb} S_{WB}(A_L, \tau, \mathbb{H}) + \lambda_{bb} S_{BB}(p, y) + \lambda_{l} S_{LP}(y) $$ For Setup 2, the reward function is the same on samples containing the trigger, while the sign of the probe term is flipped on samples without the trigger, so that the model is rewarded for being detected: $$ R(p, y, A_L, \tau, \mathbb{H}) = \begin{cases} -\lambda_{wb} S_{WB}(A_L, \tau, \mathbb{H}) + \lambda_{bb} S_{BB}(p, y) + \lambda_{l} S_{LP}(y) & \text{if trigger} \in p \\ +\lambda_{wb} S_{WB}(A_L, \tau, \mathbb{H}) + \lambda_{bb} S_{BB}(p, y) + \lambda_{l} S_{LP}(y) & \text{otherwise} \end{cases} $$ Here $\lambda_{wb}, \lambda_{bb}, \lambda_{l}$ are all hyperparameters. Generally, fixing the hyperparameters to $\lambda_{wb} = 0.8$, $\lambda_{bb} = 1.0$ and $\lambda_{l} = 2.0$ gives good results. Unless mentioned otherwise, we default to these hyperparameters. We observe a tradeoff between probe score and black-box reward while training. The model can also collapse into optimising only one of the two, depending on these hyperparameters (see Appendix C). Instead of having a dedicated critic model, in all our RL setups we use Group Relative Policy Optimisation (GRPO) [25], with a sampling temperature of 1.0, a KL penalty of 0.05 and a learning rate of $10^{-5}$. The training loop for all setups is given in Algorithm 1. # 4 Evaluating defenses under RL-Obfuscation We ask the following research questions to evaluate the ability of models to evade activation monitoring techniques: • RQ1 (Robustness): How robust is the class of monitors to RL-Obfuscation? Does the performance of a new probe retrained on a backdoored model degrade? • RQ2 (Comparison): Can models learn to fool a variety of monitors? • RQ3 (Scale): Is it easier or harder for larger models to evade probes? # Key Takeaways: • The learned policy can evade activation monitors of the same architecture if the generations are not in their training dataset. • Monitors that use mean or median aggregations are not robust to RL-Obfuscation, whereas attention probes and max aggregation remain robust. • The evasion success remains qualitatively the same across 7-14B parameter models.
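The reward combination from Section 3 can be sketched in a few lines; the sign flip for untriggered samples in Setup 2 is our reading of the setup description, and the default weights follow the hyperparameters stated above.

```python
def rl_obfuscation_reward(s_wb, s_bb, s_lp, triggered=True,
                          lam_wb=0.8, lam_bb=1.0, lam_lp=2.0):
    """Setup 1 always penalises the probe score (triggered=True for every
    sample); in Setup 2 the probe term's sign flips on samples without the
    trigger, so the model is rewarded for being detected there."""
    sign = -1.0 if triggered else 1.0
    return sign * lam_wb * s_wb + lam_bb * s_bb + lam_lp * s_lp
```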
Figure 4: (Left) Old v/s New Score Distribution: Scatter plot comparing $W_s$ (Backdoored, Static) and $W_s$ (Backdoored, Retrained) across models trained with different probe thresholds. This shows that the distribution of scores for a newly trained probe is similar to and correlated with the performance on the ‘static probe’ after training. (Right) Comparison of recall at $1\%$ FPR for the old (static) and new (retrained) probe on backdoored models. Each point in the scatter plot corresponds to a unique model and training threshold. Figure 9 shows the category-wise score distributions for each of these models. Table 1: Evasion Rate ($1 -$ Recall) at $1\%$ FPR for static probes on tuned models. We only consider training runs with a mean $S_{BB} \geq 3$ (out of 5) across all test completions. An evasion rate of $0\%$ indicates that no training run satisfied the StrongReject requirement while also degrading the monitor's performance beyond its original level (see Table 4). Figure 3: ROC curves showing how the recall changes for backdoored generations and unsafe generations on base (unsafe original) and tuned models (unsafe). Here, ‘unsafe original’ means that we evaluate generations and activations of the base model. # 4.1 Results RQ1. (Robustness) To evaluate the robustness of a class of monitors against RL-Obfuscation, we use the Backdoored Setup with a curriculum of 10 steps, increasing the training threshold every 3 epochs (see Section 3). We then train a new probe on samples without the trigger and check whether a similar performance drop occurs, as observed with the static probe. If a similar drop is observed, then we can claim that the model has learned a general mechanism that evades unseen and newly trained probes as well, not just a single static probe. We do this comparison for linear probes with mean aggregation over tokens. Figure 2 shows a violin plot comparing the change in score distributions across all categories.
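Retraining a probe of the same architecture on new samples is conceptually simple; as a minimal stand-in for the paper's linear probe (logistic regression on mean-pooled activations, not their actual training code):

```python
import numpy as np

def train_linear_probe(acts, labels, lr=0.1, steps=500):
    """Fit weights w and bias b by gradient descent on the logistic loss.

    acts: (n, d) mean-pooled layer-L activations; labels: 1 = harmful."""
    rng = np.random.default_rng(0)
    w, b = rng.normal(0.0, 0.01, acts.shape[1]), 0.0
    y = np.asarray(labels, dtype=float)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(acts @ w + b)))  # sigmoid probabilities
        grad = p - y                                # dLoss/dlogit per sample
        w -= lr * acts.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy activations: two well-separated clusters standing in for safe/harmful.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 0.5, (50, 8)), rng.normal(2.0, 0.5, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
w, b = train_linear_probe(X, y)
accuracy = (((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y).mean()
```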
Qualitatively, the score distributions of the new probe resemble those of the original probe, with slightly higher scores for backdoored samples. We also measure the change in these distributions by computing the Wasserstein distance between a category's scores and the safe generation scores: $$ W_s(\mathcal{D}_{cat}, \mathrm{WBM}) = \mathcal{W}(\{ \mathrm{WBM}(acts(\pi_{\theta}, p, y, L)) \mid p \sim \mathcal{D}_{cat},\ y \sim \pi_{\theta}(p) \},\ \{ \mathrm{WBM}(acts(\pi_{\theta}, p, y, L)) \mid p \sim \mathrm{Safe},\ y \sim \pi_{\theta}(p) \}) $$ Table 2: $1 -$ Recall at $1\%$ FPR (Evasion Rate) and StrongReject Reward (SRR) for Qwen 2.5 14B, Llama 3 8B and Llama 2 7B. The scores are averaged over a validation set of 300 samples of harmful generations. The StrongReject grader [27] assigns a prompt and its generated completion a score between 1 and 5. Here $\mathcal{W}(\cdot, \cdot)$ is the Wasserstein distance, $\mathcal{D}_{cat} \in \{ \mathrm{Unsafe}, \mathrm{Backdoored} \}$, $\pi_{\theta}$ is the backdoored model, and $\mathrm{WBM} \in \{ \mathrm{Static}, \mathrm{Retrained} \}$ is the probe used to compute the scores of each category. Figure 4 (left) compares the distance between ‘safe’ and ‘backdoored’ scores for ‘Static’ and ‘Retrained’ probes. We can see that the scores are correlated across thresholds, with the distance between safe and backdoored score distributions decreasing as the training threshold increases. Since the scores are highly correlated, this generalisation of evasive ability holds across thresholds. We also observe that the newly trained probe generally yields a higher Wasserstein distance, i.e., the slight increase in backdoored scores occurs for each of these models.
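For equal-size score samples, the 1-D Wasserstein distance in $W_s$ reduces to the mean gap between sorted samples; a small sketch (our simplification, valid only for equal sample counts):

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 between two empirical distributions with equal sample sizes:
    average absolute gap under the sorted (quantile) coupling."""
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    assert len(a) == len(b), "this simple form assumes equal sample sizes"
    return float(np.mean(np.abs(a - b)))

# W_s compares a category's probe scores against safe-generation scores:
safe = [0.1, 0.2, 0.3]
backdoored = [0.4, 0.2, 0.3]  # the same scores shifted by 0.1 (in any order)
w_s = wasserstein_1d(backdoored, safe)
```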
Figure 4 (right) shows that the retrained probes have a higher recall at $1\%$ FPR compared to static probes. We also check whether the distribution of scores assigned to unsafe generations (without the trigger) changes significantly during this process. Figure 4 plots the accuracy of probes in detecting backdoored generations and unsafe generations, respectively. This is equivalent to calculating recall with respect to each distribution. Across different training runs, we see that the backdoor distribution's recall has the most significant change, whereas the unsafe distribution's recall mostly remains in the range of $94\%$ to $96\%$. Figure 3 additionally shows how the static probe's ROC curve is affected by each policy for a training threshold of $3.5\%$: we only see low recall in the extremely low-FPR range for prompts without a trigger, whereas backdoored generations have low recall up to the training threshold. Appendix E further analyses the differences in monitor scores for each of these distributions. We can therefore conclude that the learned policy successfully evades new probes with the same architecture. RQ2. (Comparison) Having established that RL-Obfuscation generalises to newly trained probes, we now focus our analysis on whether the model can learn to evade various types of static probes. Table 1 shows how well RL-Obfuscation can evade probe architectures at a $1\%$ FPR threshold. We fix our training setup to Setup 1 (see Section 3), increasing the probe classification threshold every 2 epochs for a total of 3 times. We conduct a hyperparameter search over $\lambda_{wb}$ and $\lambda_{bb}$, discarding any run with a mean StrongReject score less than 2.5 out of 5 across all test completions. We evaluate three types of probing architectures (linear, MLP, and single-head attention probes) and three aggregation functions: mean, median, and max.
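The recall-at-fixed-FPR metric used throughout can be computed by thresholding at a quantile of the safe-score distribution; a sketch with toy scores:

```python
import numpy as np

def recall_at_fpr(safe_scores, harmful_scores, fpr=0.01):
    """Pick tau so that a fraction `fpr` of safe completions are flagged,
    then return the fraction of harmful completions caught (recall)."""
    tau = np.quantile(np.asarray(safe_scores, float), 1.0 - fpr)
    return float((np.asarray(harmful_scores, float) > tau).mean())

# Toy monitor scores with partial overlap between safe and harmful:
safe = np.linspace(0.0, 1.0, 101)
harmful = np.linspace(0.5, 1.5, 101)
recall = recall_at_fpr(safe, harmful, fpr=0.01)
evasion_rate = 1.0 - recall  # the "1 - Recall" reported in the tables
```

Sweeping `fpr` over a grid and plotting `recall_at_fpr` traces out the ROC curves of Figure 3.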
For MLP probes with max aggregation, we train the probe using a maxpool strategy (see Appendix A), as the tokenwise probe performs very poorly under this type of aggregation, achieving zero recall up to a $20\%$ false positive rate. We observe that monitors that use mean or median aggregations are not robust to RL-Obfuscation, whereas attention probes and max aggregation remain robust. This suggests that how signals are pooled across tokens matters more than how expressive the per-token classifier is. RQ3. (Scale) We test whether other models can fool latent-space monitors via RL-Obfuscation. Table 2 compares Evasion Rate and output quality after running RL-Obfuscation on Qwen 2.5 14B [28], Llama 3 8B [14], and Llama 2 7B [29], respectively. Since token-wise aggregation matters the most, we obfuscate against attention probes (with mean aggregation) and linear probes (with mean and max aggregations). Table 2 compares the score assigned by the StrongReject [27] grader and the Evasion Rate ($1 -$ Recall) of the tuned model against the static probe. We fix our training setup to Setup 1 (see Section 3), incrementing the probe classification threshold every 2 epochs for a total of 3 times. The probes were trained at layers 10, 12 and 20 for Llama 2 7B, Llama 3 8B and Qwen 2.5 14B, respectively. Figure 5: (a) Probe recall at $0.5\%$ FPR for various layers. The brown bars show the original recall of the probe on the base model. For the blue and yellow bars, generations sampled from the tuned model are fed to the base and RL-tuned models, respectively. The recall plotted is on the activations of the base and RL-tuned models. This allows us to measure the effect of black-box behaviour alone through the difference between the brown and blue bars, and the effect of changes in model internals through the difference between the blue and yellow bars. The model was trained to fool a monitor at layer 10, but we also observe obfuscation at other layers.
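The probe families compared above differ mainly in how they pool tokenwise signals; a numpy sketch with hypothetical weights illustrates why attention and max pooling resist dilution by filler tokens:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def linear_probe_score(acts, w, agg=np.mean):
    """Tokenwise linear logits, pooled by mean/median/max."""
    return float(agg(acts @ w))

def attention_probe_score(acts, w_score, w_attn):
    """Single-head attention pooling: learned attention decides which tokens
    count, so filler tokens cannot dilute a few high-scoring harmful ones."""
    weights = softmax(acts @ w_attn)       # attention weights over tokens
    return float(weights @ (acts @ w_score))

# Hypothetical weights; one "harmful" token among two filler tokens:
acts = np.eye(3)                           # 3 tokens, 3-dim activations
w_score = np.array([1.0, 0.0, 0.0])        # only token 0 looks harmful
w_attn = np.array([10.0, 0.0, 0.0])        # attention locks onto token 0
```

Under mean pooling the two filler tokens drag the score down to one third, while max pooling and the attention probe still report a near-maximal score, matching the robustness pattern in Table 1.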
(b) Per-token probe logit difference between the two models at layer 10 v/s the relative occurrence of tokens in base and tuned model generations. We observe that the model learns to use HTML tags and punctuation more frequently. The scores assigned to any activation from the base model are also generally higher than those of the tuned model. We observe that the behaviour seen in Table 1 remains qualitatively the same across all the models in Table 2. Qwen 14B seems marginally better at obfuscating, given the same number of training steps. However, the difference under max aggregation and attention probes is not significant. # 5 Evaluation of Policies Learned by the Language Model We ask the following research questions: RQ1 (Effect of behaviour): How much does the change in behaviour alone contribute to the success of the attack v/s the change in internal activations? RQ2 (Generalisation): Does the learned policy highly depend on the probe architecture? Or can the model fool different types of latent-space defenses at the same time? # Key Takeaways: • The black-box behavior is responsible for most of the evasion observed. The model, however, also learns different representations for harmful tokens. • RL-Obfuscation on one type of static probe generalises to other types of static probes, except maxpool probes and attention-based probes. # 5.1 Results RQ1. (Effect of behaviour) To evade latent-space monitors, the model can either learn policies that are out of distribution for the probes, or the change in weights across fine-tuning itself can directly induce different activations, which in turn are out of distribution for the probes. In Figure 5a, we isolate the effect of the change in behaviour and the change in activations during RL-Obfuscation. We first generate completions based on the policy learned by performing RL-Obfuscation on a static probe.
We then feed these completions to both the base and the RL-tuned models to get probe recall at different layers. Formally, this can be denoted as: $$ S_{M, L} = \mathbb{E}_{x \sim \mathrm{Unsafe},\ y \sim \pi_{\theta}(\cdot \mid x)} \left[ \mathbf{WBM}_{L}(acts(M, x, y, L)) \right] $$ Figure 6: Effect of RL-Obfuscation on other probes: Accuracy of probes in detecting harmful generations (recall) generated by three models: the helpful-only base model (blue), a model trained to evade linear probes (yellow), and a control model that was trained only to maximise the StrongReject grader using GRPO [25] (brown). where $M \in \{ \text{RL-tuned model}, \text{base model} \}$, $\pi_{\theta}$ is the policy learned by the RL-tuned model, $L$ is a particular layer in the model, and $\mathbf{WBM}_{L}(\cdot)$ is a monitor trained to detect harmful completions on the base model using activations at layer $L$. The model was trained using Setup 1 from Section 3 until it completely evades the monitor at $0.5\%$ FPR. The whitebox monitor used here was a linear probe with mean aggregation. All probes used to compute recall were trained on the base model. Note, however, that the model is not optimised against probes trained at other layers. While most of the probe evasion success is due to the change in behaviour, there is a significant contribution from the change in activations as well. Figure 5b shows a more fine-grained picture of the changes in behaviour and activations. We use the same experimental setup as described above and plot the mean unnormalised probe logit difference at layer 10, when both models are fed completions from the tuned model $(y \sim \pi_{\theta})$. We categorise each token using simple regular expressions. We observe the following two changes: 1. Change in overall activations: Activations on the same policy are generally scored higher with respect to the tuned model.
This is a global phenomenon and occurs throughout the generations. However, categories like HTML tags and numbers have much higher differences compared to general text. 2. Learning a different black-box behavior that evades probes: The tuned model learns to use certain categories of text, like HTML tags and punctuation, with a relatively high frequency and a low probe score. We believe this has a much stronger effect and is what drives the mean score below the training threshold, due to (1) the two orders of magnitude difference in frequencies between these categories and others, and (2) the fact that the model cannot fool attention-based and max-aggregation monitors, which should not be affected by this behaviour at all. Other types of black-box behaviour learned after RL-Obfuscation are displayed in Section 1. These two factors, and the magnitude of their effects, explain the trend observed in Figure 5b, where the change in policy contributes approximately $70\%$ of the drop in recall. RQ2. (Generalisation) To answer how generalisable RL-Obfuscation attacks are, we look at the drop in recall at $1\%$ FPR for various probes on a model that was only trained to evade a linear probe with mean aggregation. The model was trained using Setup 1 from Section 3. The probe classification threshold was incremented uniformly after every epoch, starting at an FPR value of $0.3\%$, for 3 epochs. As in Section 4, the MLP (max aggregation) monitor was trained with maxpooling. We can see that, apart from attention probes, all probes with mean aggregation are evaded by the model. We also see that no probe with max aggregation can be evaded. We can therefore conclude that RL-Obfuscation against one monitor generalises to the other classes of monitors that are themselves vulnerable to RL-Obfuscation.
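The regex-based token categorisation used in the per-token analysis above could look like the following; the exact patterns are illustrative, not the paper's.

```python
import re

# Hypothetical categories mirroring the analysis: HTML tags, numbers,
# punctuation, and everything else as general text.
CATEGORIES = [
    ("html", re.compile(r"^</?\w+[^>]*>$")),
    ("number", re.compile(r"^\d+(\.\d+)?$")),
    ("punctuation", re.compile(r"^[^\w\s]+$")),
]

def categorise(token):
    """Return the first matching category name, falling back to 'text'."""
    for name, pattern in CATEGORIES:
        if pattern.match(token):
            return name
    return "text"
```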
Latent-space monitors aim to detect undesirable behaviours in large language models by leveraging internal model representations rather than relying solely on black-box outputs. These methods have shown promise in identifying behaviours such as deception and unsafe completions, but a critical open question remains: can LLMs learn to evade such monitors? To study this, we introduce RL-Obfuscation, in which LLMs are finetuned via reinforcement learning to bypass latent-space monitors while maintaining coherent generations. We apply RL-Obfuscation to LLMs ranging from 7B to 14B parameters and evaluate evasion success against a suite of monitors. We find that token-level latent-space monitors are highly vulnerable to this attack. More holistic monitors, such as max-pooling or attention-based probes, remain robust. Moreover, we show that adversarial policies trained to evade a single static monitor generalise to unseen monitors of the same type. Finally, we study how the policy learned by RL bypasses these monitors and find that the model can also learn to repurpose tokens to mean something different internally.
# 1 Introduction Recent large language models (LLMs) have demonstrated impressive performance in generation tasks. However, these models often struggle to attribute their outputs accurately through proper citations to source material, particularly in attributed generation tasks such as question answering (QA) (Bohnet et al., 2023; Gao et al., 2023b; Slobodkin et al., 2024) and summarization (Laban et al., 2024). Precise attributions, for example in the form of citations, have become increasingly important with the advent of agentic “deep research” workflows (OpenAI, 2025; Gemini, 2025), which explore the web to retrieve diverse sources and synthesize citation-supported reports. Consequently, failure to provide precise and fine-grained attributions significantly harms the faithfulness and trustworthiness of the generated content, hindering the verification of potential “hallucinations”—generated content not supported by input sources (Ji et al., 2023; Rashkin et al., 2023). Recent work suggests models achieve higher accuracy when attribution is not explicitly required (Zhang et al., 2024a), indicating a significant challenge in managing both accurate generation and faithful attribution simultaneously. For example, as illustrated in Figure 1, baseline methods often fail to provide accurate citations, resulting in incorrect or missing attributions. This problem underscores the necessity for a modular approach that decouples the generation of accurate answers from the process of producing precise and interpretable attributions. Figure 1: Example output produced by generating with citations (left) compared to GENERATIONPROGRAMS (right), which first generates an executable program and then performs text operations. The baseline method, ALCE (Gao et al., 2023b), may omit or generate incorrect citations (underlined), whereas GENERATIONPROGRAMS produces comprehensive and accurate citations across documents. We have color-coded the attributions in GENERATIONPROGRAMS for easier verification, and shortened the example to fit the page.
Furthermore, current attribution methods predominantly offer only corroborative attributions, identifying sources that support generated statements, rather than contributive attributions, which would explicitly clarify how and why specific sources led the model to its final output (Worledge et al., 2023; Cohen-Wang et al., 2024). This distinction is essential because verifying citations alone can be challenging (Rashkin et al., 2023; Liu et al., 2023a), while clearly understanding which sentences contributed allows users to directly verify the accuracy and appropriateness of generated content. Without contributive attributions, the interpretability of models remains limited, hindering fine-grained control and trust in the generation process. Motivated by recent developments in modular “code agent” architectures—frameworks that effectively break down complex tasks into simpler executable code (Yuan et al., 2024; Yang et al., 2024; Wang et al., 2024b)—we propose GENERATIONPROGRAMS, an interpretable generation framework explicitly designed to facilitate accurate and fine-grained attribution. The text-to-code framework has led to various plan-based approaches, such as “Attribute First” (Slobodkin et al., 2024), which decompose the generation process into distinct stages of content selection and then surface realization. However, these methods can be rigid in their structure, often relying on predefined templates or a fixed sequence of operations. Our work builds on the idea of explicit, executable plans for generation, drawing inspiration from the modular framework proposed in SummarizationPrograms (Saha et al., 2023).
Specifically, GENERATIONPROGRAMS formulates generation as a two-stage process: first, creating a structured executable program composed of modular text operations (such as paraphrasing, compression, and fusion) explicitly tailored to a given query; and second, executing these operations through clear, query-specific instructions to produce the final response. As illustrated in Figure 1, GENERATIONPROGRAMS initially generates a program, for instance instructing a fusion module to merge two sentences with explicit guidance on which aspects should be highlighted in the output. After execution, the resulting sentence is both instruction-consistent and faithful to the input sentences (e.g. S2 and S101 in Figure 1), and the two sentences that were used automatically become part of the sentence-level citations (e.g. [2][101]). This structured decomposition naturally enhances attribution by explicitly tracking the source inputs used at each modular step. Consequently, attribution becomes straightforward by reviewing the explicit program trace, clearly linking each generated segment to its source sentences. The program itself also serves as an explicit explanation of the generation process, enabling fine-grained verification and facilitating targeted, localized edits. While structured programs significantly enhance traceability and attribution quality, language models still face challenges when handling extensive input contexts. Recent advancements enable LLMs to handle extended contexts spanning millions of tokens (Gemini Team et al., 2024; OpenAI et al., 2024); however, they often struggle with reasoning over lengthy inputs, a phenomenon known as “lost-in-the-middle” (Liu et al., 2024a). To address this limitation within our structured generation approach, we investigate whether decoupling the task of identifying relevant information from the attributed generation process can further enhance attribution accuracy.
Specifically, we apply summarization techniques to filter irrelevant content from retrieved source documents (Gao et al., 2023b), ensuring models concentrate solely on relevant information during generation. Empirical evaluations across two long-form QA tasks show that GENERATIONPROGRAMS substantially improves attribution quality at both document-level and sentence-level granularities, achieving gains of $31.6\%$ and $27\%$, respectively, in attribution F1 over baseline methods that directly generate citations alongside outputs. Although we observe a minor trade-off in answer correctness, we demonstrate that applying summarization techniques to source documents significantly mitigates the long-context problem, balancing attribution quality and answer accuracy. Similar observations are also made when applying GENERATIONPROGRAMS to multi-document summarization tasks. Furthermore, we showcase practical applications of GENERATIONPROGRAMS, including its effectiveness as a post-hoc attribution method. Specifically, GENERATIONPROGRAMS achieves superior performance in correctly attributing previously generated sentences compared to traditional attribution methods, with the generated program serving as an interpretable explanation of the model's reasoning process. Finally, we illustrate that the contributive nature of the generated program enables fine-grained, modular-level refinement. By correcting modular executions that are not faithful to their inputs, we achieve further improvements in attribution quality, surpassing standard attribution methods that employ reranking strategies. # 2 GENERATIONPROGRAMS For long-form question-answering, the input consists of a query, or question, $q$ and a set of $n$ retrieved documents $D = \{ D_1, D_2, \ldots, D_n \}$. The model then uses these inputs to generate an output $o$.
Our method, GENERATIONPROGRAMS, operates in two main steps: generating a program plan and executing this program using neural modules. Below, we first introduce a formal definition and then describe each step in detail. # 2.1 Program Definition Following Saha et al. (2023), we define a program $P = \{ P_i \}_{i=1}^{k}$ as a collection of $k$ trees, each corresponding to a single sentence in the output. Each tree $P_i = (V, E)$ consists of nodes $V$ representing source or generated sentences and edges $E$ representing neural modules applied to these sentences. Leaf nodes correspond to sentences extracted directly from documents $D$, intermediate nodes represent sentences generated by neural modules, and the root node represents the final generated sentence. The overall answer combines root nodes from all trees, enabling clear tracing and attribution of each output sentence back to the modules and input sentences that generated it. Figure 2: Example of the two-step pipeline: a generated program (Step 1: program generation) and its execution with sentence-level attributions (Step 2: program execution). # 2.2 Program Generation The first step of GENERATIONPROGRAMS is generating a program based on the query and retrieved documents. Specifically, we prompt a large language model (LLM) to generate Python-executable programs that correspond to individual output sentences, leveraging Python’s Abstract Syntax Tree (AST) module for easy parsing. These programs may contain nested functions, allowing the output from one neural module to serve as input for another. As illustrated in the top section of Figure 2, the generated program structure naturally results in multiple trees, each representing distinct output sentences. Additionally, our method generalizes beyond single-document summarization to query-focused and multi-document scenarios (Appendix A.1). # 2.3 Program Execution with Neural Modules The second step involves executing the generated program using neural modules. As depicted in Step 2 of Figure 2, GENERATIONPROGRAMS retrieves the relevant sentences and invokes corresponding modules. For example, the fusion module processes sentences in the first program, whereas the paraphrase module applies to sentences in another. Each resulting sentence attributes its content to the input sentences used. Our neural modules build upon prior modular summarization approaches (Lebanoff et al., 2019; 2020; Jing & McKeown, 1999; 2000; Saha et al., 2023): 1. Paraphrase: Generates a sentence conveying the same meaning with different wording. 2. Compression: Condenses sentences while preserving the original meaning. 3. Fusion: Integrates multiple sentences into a single coherent sentence, resolving redundancy and reconciling differing viewpoints. 4. Extract: Facilitates extractive summarization to extract source sentences.
When executing neural modules, the output should be entailed by its inputs; this principle is what we refer to as local attribution. Ensuring local attribution guarantees that, even in the case of nested functions, each step along the program trace remains faithful with respect to its input, thereby preserving transitive attribution integrity. We analyze the local attribution performance of the modules in Appendix D and further investigate localized attribution verification and refinement strategies in Section 4.3.

# 2.4 Effective Handling of Long Context Using Summarization

While recent LLMs can handle extended contexts spanning millions of tokens (Gemini Team et al., 2024; OpenAI et al., 2024), these models still exhibit limitations when reasoning over lengthy inputs, a phenomenon known as "lost-in-the-middle" (Liu et al., 2024a). To mitigate this, we apply summarization techniques to source documents, reducing redundancy and filtering irrelevant information, thus also reducing the likelihood of generating unfaithful summaries (Gao et al., 2023b). Applying summarization to the retrieved documents ensures the model remains focused, potentially improving attribution accuracy. Building upon Saha et al. (2023), we use strong query-focused summarization systems to individually summarize each source document. Document-level attribution is naturally ensured through summarization since recent methods reliably produce faithful summaries (Zhang et al., 2024b). However, precise sentence-level attribution necessitates either explicitly tracked text manipulations, as in GENERATIONPROGRAMS, or purely extractive summarization. Thus, our explicit program-based approach provides rigorous sentence-level traceability and accurate attribution.

# 3 Experimental Setup

# 3.1 Datasets

We evaluate our approach using challenging question-answering tasks involving retrieved sources. Dataset statistics are summarized in Table 4 in the Appendix.
We include the ASQA dataset (Stelmakh et al., 2022), comprising factoid questions, as collected by Gao et al. (2023b). ASQA uses relevant Wikipedia snippets retrieved via GTR (Ni et al., 2022), from which we select the top 10 passages for our experiments. Additionally, we use the LFQA dataset (Liu et al., 2023a) following the split curated by Slobodkin et al. (2024), which includes ground-truth documents as part of the evaluation. For summarization, we use the dataset processed by Ernst et al. (2024), a multi-document summarization (MDS) task derived from MultiNews (Fabbri et al., 2019). Additional details are in Appendix B.

# 3.2 Metrics

Answer Accuracy. Since both datasets include gold-standard answers, we assess the model's responses against these references using established metrics. Specifically, we use Exact Match (EM) for ASQA, and ROUGE-L (Lin, 2004) for LFQA and MultiNews MDS.

Attribution Quality. Following prior research (Gao et al., 2023b; Zhang et al., 2024a), we evaluate attribution quality using three model-based metrics: attribution recall, attribution precision, and attribution F1. Attribution recall checks whether generated statements are fully supported by the cited sources by assessing entailment. Attribution precision ensures each citation accurately supports its respective statement. Attribution F1 is the harmonic mean of precision and recall. For ASQA, we use AutoAIS, leveraging TRUE (Honovich et al., 2022), a T5-11B model fine-tuned on various NLI datasets that correlates strongly with human judgment. Due to TRUE's input length limitations on LFQA and MDS, we instead utilize GPT-4o-based NLI evaluation, as recommended by Zhang et al. (2024a). Additionally, we measure the proportion of sentences lacking citations.

Table 1: Results on long-form QA with document-level and sentence-level attributions.
# 3.3 Model and Methods

For our experiments, we primarily compare GENERATIONPROGRAMS against ALCE (Gao et al., 2023b), a method which directly generates answers and corresponding citations. We use GPT-4o (OpenAI, 2024) as the backbone for both methods, and the prompts are in Appendix E. We further show results with an open-source model (Llama 3.3 70B) in Appendix C.4. Both methods are provided with a one-shot example to ensure adherence to the output format. To enhance attribution quality in ALCE, we separately evaluate it at both the document and summary levels. For filtering irrelevant information with summarization, we employ extractive summarization (Erkan & Radev, 2004; Nenkova & McKeown, 2011; Cheng & Lapata, 2016; Narayan et al., 2018) on retrieved documents. In Appendix C.2, we benchmark various summarization methods for filtering retrieved content. For ASQA, we additionally perform an initial relevance filtering step using GPT-4o to exclude irrelevant documents. Detailed procedures for relevance filtering are provided in Appendix B. For replicability, we set the temperature to 0 for all models.

# 4 Results

We report the attribution results in Section 4.1, and then show interesting applications of GENERATIONPROGRAMS, including post-hoc attribution (Section 4.2) and using the generated program for fine-grained module-level detection and refinement (Section 4.3). We provide additional qualitative analysis in Appendix D.

# 4.1 Attribution Results

We present the results for document-level and sentence-level attribution in Table 1. GENERATIONPROGRAMS consistently achieves significant improvements in attribution F1 compared to ALCE-based methods. Specifically, GENERATIONPROGRAMS improves document-level attribution F1 by 20.4% and 39.0% on ASQA and LFQA, respectively, and enhances sentence-level attribution by 25.2% and 28.8% on the same datasets.
These results highlight GENERATIONPROGRAMS's effectiveness in improving attribution quality. However, this increased citation accuracy is accompanied by a slight reduction in answer correctness, decreasing by 2% for ASQA and 7.1% for LFQA at the document level. Nonetheless, the significant improvements in attribution quality outweigh these reductions in correctness. We further investigate the correctness evaluation in Appendix C.6, where we find that stronger LLM-based metrics and human evaluations show a smaller decrease. Furthermore, due to GENERATIONPROGRAMS's design, all generated sentences contain citations, effectively addressing a notable limitation of ALCE, where citations were omitted in 25.8% and 40% of sentences in the document-level setting for ASQA and LFQA, respectively. For MDS, the results are consistent with those observed for long-form QA tasks: we see a significant improvement in attribution quality, i.e., increasing from 63.8 to 94.4 for document-level citations.

Table 2: Post-hoc attribution results on LFQA with oracle output.

Effect of Summarization on Handling Long Context. We next examine how applying summarization to remove irrelevant content affects model performance. For ALCE, summarization enhances LFQA attribution by 2.6% at the document level and 3.3% at the sentence level, though it slightly reduces EM for ASQA by 1% at the document level and 0.5% at the sentence level. Importantly, we observe that summarization generally positively impacts attribution quality (except for sentence-level attribution on ASQA) by increasing attribution F1 and reducing the number of sentences lacking attribution. This contrasts with prior findings by Gao et al. (2023b), who reported no improvements from summarization in attribution quality.
When applied to GENERATIONPROGRAMS, extractive summarization typically lowers attribution F1 slightly, but it helps mitigate the correctness trade-off caused by enhanced attribution quality. Thus, summarization provides a viable strategy for balancing answer correctness with attribution precision. For MDS, we observe that summarization does not improve the results, which agrees with prior findings that hierarchical summarization does not improve performance for summarization tasks (Wan et al., 2025). # 4.2 GENERATIONPROGRAMS as Post-Hoc Attribution Beyond concurrent content generation and attribution, we demonstrate that GENERATIONPROGRAMS can also be effectively adapted for post-hoc attribution, where attributions are identified after text generation (Gao et al., 2023a). Unlike traditional methods focused solely on finding supporting citations, GENERATIONPROGRAMS employs a form of contributive attribution (Cohen-Wang et al., 2024), explicitly identifying which sources directly contribute to the generation of specific content. This capability leverages the detailed program trace within GENERATIONPROGRAMS to precisely illustrate how input sources are integrated through text-based operations. Thus, our approach aligns with the concept of context attribution (Cohen-Wang et al., 2024), pinpointing exact contextual elements responsible for individual statements. This method is practically beneficial for tasks such as rapid verification and targeted refinement of generated outputs, as demonstrated in Section 4.3. Notably, our method does not require access to model logits, making it suitable for both open-source and proprietary models. In our post-hoc attribution setup, the generated text is additionally provided as input, and GENERATIONPROGRAMS then generates a program trace to reconstruct or simulate the original output. We conduct evaluations using LFQA, which includes human annotations identifying relevant sentences for each gold answer. 
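Because LFQA provides human annotations of the relevant sentences for each gold answer, the quality of recovered citations can be scored by set overlap between the predicted and annotated sentence ids. The helper below is a sketch of such a metric under that assumption, not the paper's evaluation script:

```python
def citation_overlap_f1(predicted, gold):
    """Set-overlap precision, recall, and F1 between predicted and
    annotated sentence-id citations."""
    predicted, gold = set(predicted), set(gold)
    if not predicted or not gold:
        return 0.0, 0.0, 0.0
    tp = len(predicted & gold)              # correctly cited sentences
    precision = tp / len(predicted)
    recall = tp / len(gold)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: one overlapping citation out of two on each side.
p, r, f1 = citation_overlap_f1({"S1", "S2"}, {"S2", "S3"})
```

Aggregating the gold sentences per output (rather than per sentence) yields the more lenient output-level variant of the same score.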
Given the known gold-standard sentence annotations, we measure citation precision, recall, and F1 by calculating the overlap between predicted and annotated sentence-level citations. Unlike standard attribution evaluations relying on NLI, here we specifically assess the accuracy of identifying the correct set of sentence IDs. We consider two settings: (1) sentence-level evaluation, measuring citation overlap individually for each sentence, and (2) output-level evaluation, aggregating relevant sentences into sets for a more lenient assessment. We benchmark GENERATIONPROGRAMS against commonly used attribution baselines, including semantic similarity-based attribution (Reimers & Gurevych, 2019) and prompt-based attribution (Zhang et al., 2024a). For GENERATIONPROGRAMS, the attribution can be retrieved by iterating through the program trace and treating the relevant sentences as sentence-level attributions. The results, presented in Table 2, highlight that GENERATIONPROGRAMS significantly outperforms other methods, achieving a sentence-level citation F1 of 47.4, far exceeding the next-best baseline (LLM-based prompting), which achieves only 7.8 F1 in the sentence-level setting. Additionally, since GENERATIONPROGRAMS can reconstruct sentences via its generated program traces, we reconstruct the output using the generated program and compute ROUGE scores comparing reconstructed sentences against the original gold responses. The reconstructed output achieves scores of 58.9/41.2/50.7 for ROUGE-1/2/L, respectively. Regarding correctness, we observe that applying refinement actually decreases ALCE's score; in contrast, our module-level refinement achieves a 0.1-point improvement. These results underscore GENERATIONPROGRAMS's effectiveness as a robust post-hoc attribution method, providing both accurate and interpretable contributive attributions.

Table 3: Fine-grained refinement results.
In addition to attribution F1, we also show the percentage of module executions that are considered entailed for GENERATIONPROGRAMS.

# 4.3 Fine-grained Module-level Detection and Refinement

We have demonstrated that GENERATIONPROGRAMS achieves strong attribution quality for both concurrent and post-hoc attributed generations. Crucially, the accompanying program enables our method to perform contributive attribution, explicitly explaining why the model generates particular answers. Here, we explore an additional benefit of the program by using it to enhance attribution quality further. Since the program consists of sequences of neural module executions, maintaining faithfulness in these executions is critical for accurate attribution. Our modular approach naturally facilitates verifying the correctness of attributions and allows for proactive refinement to ensure output correctness at the module level. We refer to this as module-level refinement.

To illustrate module-level refinement, we adopt the following experimental setup. We leverage AutoAIS to measure entailment between the generated module outputs and their respective input sentences. If a module generates a non-attributable output, we refine it using a reranking approach. Specifically, we sample five candidate outputs with a temperature of 1.0 and select the first candidate considered entailed by AutoAIS. If all candidates receive identical entailment decisions, the first one is chosen by default. For comparison, we apply reranking to ALCE by sampling five entire output candidates and selecting the one achieving the highest AutoAIS score, as ALCE lacks fine-grained entailment checks at the module level. We evaluate performance using attribution precision, recall, F1 scores, and the percentage of module executions considered entailed. The results, presented in Table 3, clearly illustrate the advantages of GENERATIONPROGRAMS. First, GENERATIONPROGRAMS significantly outperforms ALCE in attribution quality.
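The module-level reranking in the setup above can be sketched in a few lines. The `generate` and `entails` callables below are hypothetical stand-ins for a neural module sampled at temperature 1.0 and the AutoAIS entailment checker; the fallback mirrors the paper's rule of keeping the first candidate when none is accepted:

```python
def refine_module(generate, entails, inputs, n_candidates=5):
    """Sample candidate module outputs and return the first one the
    entailment checker accepts; fall back to the first candidate when
    no candidate is judged entailed."""
    candidates = [generate(inputs) for _ in range(n_candidates)]
    for cand in candidates:
        if entails(inputs, cand):
            return cand
    return candidates[0]

# Toy demonstration with stand-in callables (illustrative only):
_pool = iter(["unsupported claim", "robots replace humans in factories",
              "a", "b", "c"])
best = refine_module(lambda inputs: next(_pool),
                     lambda inputs, out: "robots" in out,
                     "S1 S2")
```

Because only the failing module is regenerated, this check touches a few sentences at a time rather than the whole answer, which is where the cost advantage over whole-output reranking comes from.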
While applying reranking to ALCE improves its attribution F1 by 23.7 and 15.3 points on ASQA and LFQA, respectively, ALCE's performance still falls short of GENERATIONPROGRAMS without any reranking, highlighting GENERATIONPROGRAMS's intrinsic advantage. Moreover, GENERATIONPROGRAMS demonstrates notably high baseline attribution F1 and recall scores, indicating strong inherent faithfulness in module execution. Nevertheless, targeted module-level refinements further enhance performance, increasing attribution F1 by 4 points on ASQA and 2.7 points on LFQA. These improvements underscore the substantial utility of GENERATIONPROGRAMS's structured program approach, showcasing practical strategies to further enhance attribution accuracy. An additional benefit of GENERATIONPROGRAMS for refinement is the lower computational cost. Refining modules selectively is significantly more efficient since only a small fraction requires refinement; for instance, only 4.5% of modules in LFQA need refinement. Furthermore, module refinement typically involves just a few sentences rather than entire documents, as is necessary for refining with ALCE, thus substantially reducing computational overhead.

# 5 Related work

Attributed Generation. Previous research on generating citations in language models primarily integrates citations into the generation process (Nakano et al., 2022; Gao et al., 2023a; Thoppilan et al., 2022; Chen et al., 2023). These methods include training specialized models (Menick et al., 2022; Zhang et al., 2024a) and employing prompting or in-context learning strategies (Gao et al., 2023b). Alternative methods focus on post-hoc attribution, adding citations after the generation process is complete (Gao et al., 2023a; Phukan et al., 2024; Qi et al., 2024). Our proposed method uniquely supports both concurrent and post-hoc attribution, explicitly designed to ensure robust attribution quality.
Crucially, our approach aligns with the concept of contributive attribution (Cohen-Wang et al., 2024), directly identifying sources that actively influence the generated outputs of the language model. Additionally, Zhang et al. (2024c) explores the use of multiple agents to iteratively answer individual chunks of text, aiming to mitigate positional bias—a strategy similar to our approach described in Section 2.4. However, whereas their primary goal is addressing the long-context issue to enhance answer accuracy, our focus lies chiefly in improving the interpretability of the generation process. Program-based Generation. Recent improvements in the coding capabilities of language models have sparked significant interest in program-based generation (Yuan et al., 2024; Yang et al., 2024; Wang et al., 2024b), where complex tasks are systematically decomposed and executed via structured programs. Related research also includes teaching models to utilize external tools (Schick et al., 2023; Mialon et al., 2023). In the context of text operations, Slobodkin et al. (2024); Ernst et al. (2022) similarly employ a two-step approach, involving planning and subsequent execution via a fusion module, while Saha et al. (2023) introduce a “program-then-execute” framework specifically tailored for summarization tasks. However, our proposed GENERATIONPROGRAMS method provides greater generalizability in both planning and execution phases by enabling explicit instruction-passing to neural modules. By utilizing Python-executable programs, our framework supports flexible integration of diverse modules and instructions, thus allowing more customized and adaptable output generation. Additionally, the explicit program traces generated by GENERATIONPROGRAMS enhance interpretability, allowing users to clearly understand model decisions and perform localized verification and edits as needed. We provide further explanations of the differences in Appendix A.2.
Recent large language models (LLMs) achieve impressive performance in source-conditioned text generation but often fail to correctly provide fine-grained attributions for their outputs, undermining verifiability and trust. Moreover, existing attribution methods do not explain how and why models leverage the provided source documents to generate their final responses, limiting interpretability. To overcome these challenges, we introduce a modular generation framework, GenerationPrograms, inspired by recent advancements in executable "code agent" architectures. Unlike conventional generation methods that simultaneously generate outputs and attributions or rely on post-hoc attribution, GenerationPrograms decomposes the process into two distinct stages: first, creating an executable program plan composed of modular text operations (such as paraphrasing, compression, and fusion) explicitly tailored to the query, and second, executing these operations following the program's specified instructions to produce the final response. Empirical evaluations demonstrate that GenerationPrograms significantly improves attribution quality at both the document level and sentence level across two long-form question-answering tasks and a multi-document summarization task. We further demonstrate that GenerationPrograms can effectively function as a post-hoc attribution method, outperforming traditional techniques in recovering accurate attributions. In addition, the interpretable programs generated by GenerationPrograms enable localized refinement through modular-level improvements that further enhance overall attribution quality.
# Contents

1 Introduction and Objective
2 Relevant Fundamentals and Related Work
 2.1 Relevant Fundamentals
  2.1.1 Traditional Supervised Learning
  2.1.2 FSL
  2.1.3 Time Series Classification
 2.2 Related Work
3 Use Case Description and Data Understanding
 3.1 Data Source
 3.2 Data Understanding
 3.3 Use Case Description
4 Data Preprocessing and Sampling
 4.1 Data Preprocessing
 4.2 Data Separation
 4.3 Episodic Sampling
5 Model Design
 5.1 Prototypical Network Approach
  5.1.1 Prototypical Network Approach for Multi-Class Setting
  5.1.2 Prototypical Network Approach for Multi-Label Setting
 5.2 Model-Agnostic Meta-Learning Approach
 5.3 Backbone
  5.3.1 1D CNN
  5.3.2 InceptionTime
  5.3.3 Moment
 5.4 Specification
6 Experimental Results and Discussion
 6.1 Hyperparameter Setting
  6.1.1 Basic Hyperparameter
  6.1.2 MAML Hyperparameter
  6.1.3 Prototypical Network Hyperparameter
  6.1.4 Backbone-Specific Hyperparameter
 6.2 Experiment Setting
 6.3 Experiment Results
 6.4 Summary and Discussion of Experimental Results
7 Conclusion and Outlook
 7.1 Conclusion
 7.2 Outlook
Bibliography
A Appendix

# List of Figures

1 One example of a prototypical network handling a support set with N-way=3 and N-shot=3. The query set is simplified to have one sample.
2 One example of MAML adapting to different episodes [1].
3 Structure of the Inception module [2].
4 Overview of Moment [3].
5 Overview of data collection [4].
6 Data overview averaged over single-labeled samples. The y-axis shows the torque value in Newton meters, and the x-axis is the time in seconds.
7 Data overview averaged over multi-labeled samples. The y-axis shows the torque value in Newton meters, and the x-axis is the time in seconds.
8 Simplified flowchart of preprocessing screw-fastening raw data.
9 Few-shot learning methodology pipeline overview. Every node is a procedure except the support set and query set.
10 The prototypical network pipeline for multi-label screw-fastening data, where there are centroids for the activated labels in the representation space. Here, we apply softmax on the negative distance from the query to each prototype. For simplicity, we use real labels instead of one-hot encoding representations.
11 MAML training pipeline overview. In one episode, the learning model is first initialized from the meta model. Then the support set is fed to the learning model, and the resulting loss is backpropagated to the learning model itself. Afterwards the query set is fed to the learning model, and the resulting loss is backpropagated to the meta model. 1.X indicates the steps during support-set adaptation; 2.X denotes the steps handling the query set.
12 1D CNN architecture used in this project.
13 Inception module used in this project.
14 Moment (large) architecture for representation learning [3]. The main architecture consists of patchification, a patch embedder, and a sequence of T5 encoder blocks. It takes a time series of length 512 and outputs a representation vector of 1024 dimensions. One T5 encoder [5] is a variant of the transformer encoder. The embedder is simply one linear layer that maps from the patch dimension to 1024.
15 Experiment pipeline. Initially we sample hyperparameters (HPs) and use them to obtain a trained model, whose performance on the validation set is recorded. In the end, the HP setting leading to the best validation performance is selected. Then comes the test evaluation: the best HP setting initializes the final model, which is trained on the training set and evaluated on the test set, and its test performance is recorded. This test evaluation is conducted several times. The hyperparameter sampling method is the Bayesian sampling method from Optuna [6].
# List of Tables

1 Related work summary. Abbreviations used in this table: EEG (Electroencephalography), EMG (Electromyography), EHR (Electronic Health Record), TCN (Temporal Convolutional Network).
2 Raw data structure [4].
3 Collected sample overview for each class [4].
4 Data separation overview.
5 Basic hyperparameters. If a hyperparameter is fixed, it is not considered for hyperparameter optimization.
6 Hyperparameters and their ranges for different FSL methodologies.
7 Backbone-specific critical hyperparameters. The number of epochs is fixed for all backbones and not taken into account in optimization.
8 Model F1 scores across FSL methodologies. The best hyperparameters are chosen to evaluate on the test set. The lightweight models are evaluated 50 times; Moment is evaluated 15 times. The evaluation uses the sklearn [7] weighted F1 score. The model with the highest mean in each methodology is marked bold. The F1 column is in the format mean ± standard deviation.
9 The best FSL methodology for each backbone, selected by the mean weighted F1 score on the test set; Moment denotes LoRA Moment + linear.
10 Performance comparison across different classifiers and classes in the test set, measured in weighted precision, weighted recall, and weighted F1. The F1 column is in the format mean ± standard deviation.
A.11 1D CNN hyperparameters.
A.12 InceptionTime hyperparameters.
A.13 Moment LoRA hyperparameters.
A.14 Moment linear hyperparameters.
A.15 Moment LoRA + linear hyperparameters.
# List of Abbreviations

1D CNN: One-Dimensional Convolutional Neural Network
EEG: Electroencephalography
FSL: Few-Shot Learning
HST: High-Speed Train
LoRA: Low-Rank Adaptation of Large Language Models
MAML: Model-Agnostic Meta-Learning
TSML: Task-Sequencing Meta Learning

# 1 Introduction and Objective

Supervised learning models have demonstrated remarkable success across various domains, from computer vision and time series to natural language processing [8]. However, most supervised learning models require large-scale labeled datasets to achieve high performance [9], which severely limits their applicability in scenarios where labeled data is inadequate. Few-Shot Learning (FSL) aims to address this challenge by enabling models to generalize effectively from only a handful of training examples [10]. Few-shot learning has gained significant attention due to its potential to learn from very limited data, where new concepts can be understood with minimal exposure [10]. Recent advancements [11, 12, 13, 14, 15] in few-shot learning have explored meta-learning frameworks, including optimization-based approaches (e.g., Model-Agnostic Meta-Learning (MAML) [1]) and metric-learning techniques such as Prototypical Networks [16] and Siamese Networks [17]. Additionally, large-scale pretrained foundation models, particularly transformer-based architectures, have demonstrated impressive few-shot capabilities [18, 19]. FSL for time-series data remains underexplored, with only a limited number of studies published. Addressing these challenges could bring significant benefits to the engineering industry, as in some cases there are very limited samples of defects [20]. For instance, an industrial report from Sweden highlights the substantial economic impact of defects in construction, noting that preventing rare defects can save up to 18% of the total cost [21].
In this study, we provide an in-depth exploration of few-shot learning within the engineering domain, focusing on time-series data from screw-fastening process monitoring, and evaluate its performance across a range of methodologies. To address the scarcity of labeled screw-fastening datasets, we employ rigorous data preprocessing techniques followed by effective episodic sampling approaches. We evaluate existing few-shot learning methods tailored for time-series tasks, assess their effectiveness, and discuss key challenges associated with few-shot learning on screw-fastening process datasets. Chapter 2 reviews the fundamentals and related work in FSL and time-series classification. In Chapter 3, we introduce our screw-fastening dataset and describe its collection process. Chapter 4 presents the preprocessing techniques applied to the data. In Chapter 5, we detail our implementations and examine the various methodologies. Chapter 6 outlines the experimental setup and results. Finally, we offer perspectives on enhancing the performance and generalization of FSL models for industrial time-series classification.

# 2 Relevant Fundamentals and Related Work

This chapter provides an overview of the fundamental concepts and related research essential for understanding FSL and time series classification. It begins with a discussion of traditional supervised learning and its limitations in data-scarce scenarios, followed by the definition and key principles of few-shot learning. Additionally, the chapter introduces time series data and its unique characteristics. The subsequent sections explore existing studies on FSL, various learning models, and notable neural network architectures applied in time series classification.

# 2.1 Relevant Fundamentals

# 2.1.1 Traditional Supervised Learning

Traditional supervised learning is a learning paradigm in which a model is trained on a dataset of input-output pairs [22].
Given a training set $\mathcal{D}_{train} = \{(x_i, y_i)\}_{i=1}^{N}$ of size $N$, the objective is to learn a function $f$ from input features to outputs by minimizing a loss $\mathcal{L}$, which can be defined as [23]:

$$ \mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^{N} L\big(y_i, f(x_i; \theta)\big) $$

where $L$ is a loss function (e.g., cross-entropy for classification or mean squared error for regression). The model is then evaluated on unseen data $\mathcal{D}_{test} = \{(x_j, y_j)\}_{j=N+1}^{N+N'}$, where $N'$ is the test set size, to assess its generalization ability [24]:

$$ y_j \approx f(x_j; \theta^*) \quad \text{for } (x_j, y_j) \in \mathcal{D}_{test} $$

and

$$ \theta^* = \arg\min_{\theta} \mathcal{L}(\theta) $$

# 2.1.2 FSL

FSL builds upon the framework of traditional supervised learning but addresses scenarios where only a very limited number of examples per class is available [10]. The dataset is composed of episodes. Each episode $\tau_i$ consists of a support set $S_{\tau_i}$ and a query set $Q_{\tau_i}$ [24]. The support set $S_{\tau_i}$ is an N-way-K-shot dataset, and the query set $Q_{\tau_i}$ is an N-way-M-shot dataset.
Each pair $S_{\tau_i}$ and $Q_{\tau_i}$ has no intersection but shares the same label space [24]:

$$ \begin{array}{rl} & S_{\tau_i} \cap Q_{\tau_i} = \emptyset \\ & Y = \{y \mid (x, y) \in Q_{\tau_i}\} \\ & Y' = \{y \mid (x, y) \in S_{\tau_i}\} \\ & Y = Y' \\ & |Y| = |Y'| = N \\ & |S_{\tau_i}| = K \times N \\ & |Q_{\tau_i}| = M \times N \end{array} $$

where:

- $K$ denotes the number of examples per class (K shots) in the support set.
- $M$ denotes the number of examples per class (M shots) in the query set.
- $N$ denotes the number of classes (N ways).
- $x$ denotes the input features.
- $y \in \{0, 1, \ldots, C-1\}$ denotes the class label, where $C$ is the number of classes in the dataset.

Assume we have a list of episodes as a training set $\mathcal{T}_{train} = \{\ldots, \tau_i, \ldots\}$. The objective is to learn a function from each support set $S_{\tau_i}$ and query set $Q_{\tau_i}$ of the training episodes [24]. The loss function for one training episode $\tau_i$ is defined as [24]:

$$ \mathcal{L}_{\tau_i}(\theta) = \frac{1}{M} \sum_{(x, y) \in Q_{\tau_i}} L\big(y, f(x; \theta, S_{\tau_i})\big) $$

where $f(x; \theta, S_{\tau_i})$ denotes the model learned from the support set $S_{\tau_i}$, and $L$ is the loss function, e.g., cross-entropy. Similarly to traditional supervised learning, the model is evaluated on unseen episodes $\mathcal{T}_{test}$.
For each test episode $\tau_i \in \mathcal{T}_{test}$ and each sample $(x, y) \in Q_{\tau_i}$, the objective is that the model $f$ approximates the ground truth [24]:

$$ y \approx f(x; \theta^*, S_{\tau_i}) $$

where:

$$ \theta^* = \arg\min_{\theta} \mathbb{E}_{\tau_j \in \mathcal{T}_{train}} \mathcal{L}_{\tau_j}(\theta) $$

Episode Sampling

Episode sampling is a fundamental process in FSL, designed to structure training in a way that closely resembles the few-shot scenario. Each episode consists of a support set and a query set, both drawn from a subset of the available classes. For multi-class data, although we did not find a formal definition, the procedure can be inferred from Parnami and Lee [24, p. 9]: first, N-way classes are sampled uniformly without replacement; then the support set and query set are sampled uniformly from the selected classes, selecting $K \times N$ and $M \times N$ samples for the support set and query set, respectively. For the multi-label situation, Liang et al. [25] discuss episodic construction; however, their method is not applied in this report because of its complexity and the absence of publicly available source code. We propose our own multi-label episodic sampling method in Chapter 4.

Meta Learning

Meta learning [26, 27], or learning to learn [28], explores the meta-learning abilities of models. Inspired by theories of human development, meta learning aims to extract useful priors from past experiences to accelerate learning on future tasks. Whereas a conventional model tackles one classification problem at a time, a meta learner acquires a generalized strategy for classification by training across many related tasks.
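The N-way episodic sampling procedure described above can be sketched in a few lines. This is an illustrative sketch, not this report's exact implementation; the dataset layout (class label mapped to a list of samples) is assumed for simplicity:

```python
import random

def sample_episode(dataset, n_way, k_shot, m_query, rng=None):
    """N-way episodic sampling: draw n_way classes without replacement,
    then disjoint support (k_shot) and query (m_query) examples per class.
    `dataset` maps class label -> list of samples."""
    rng = rng or random.Random(0)
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for c in classes:
        # one draw per class keeps the support and query sets disjoint
        picked = rng.sample(dataset[c], k_shot + m_query)
        support += [(x, c) for x in picked[:k_shot]]
        query += [(x, c) for x in picked[k_shot:]]
    return support, query

# Toy dataset: 5 classes with 6 samples each.
toy = {c: list(range(10 * c, 10 * c + 6)) for c in range(5)}
support, query = sample_episode(toy, n_way=3, k_shot=2, m_query=2)
```

The resulting support set has $K \times N$ samples and the query set $M \times N$, with identical label spaces and no shared samples, matching the episode definition above.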
As a result, when faced with a new but analogous challenge, the meta learner can adapt rapidly and effectively. [24] In this report, all of our FSL methodologies employ meta-learning techniques—both metricand optimization-based (see Chapter 2.1.2) but are researched under FSL framework. Therefore, to avoid complicating the readers and introducing inconvenience, we will not dive further into the concepts of meta learning. Introduction on FSL concepts are sufficient for this project. FSL models There are mainly four types of FSL models [24]: Metric-based [29] approaches learn a representation in which classification reduces to measuring the distance between a query point and class prototypes. [24] Optimization-based (gradient-based) approaches learns either an initialization or an inner-loop update rule so that only a few gradient steps with the support data suffice to obtain a task-specific model. [24] Model-based approaches equip the learner with rapidly updatable internal or external memories which allows to quickly encode and retrieve new information and can potentially obviate the downsides of conventional models. [30] Transfer learning [31] is a non-meta learning approach that improves learning in the target domain by leveraging knowledge from the source domain, usually cooperated by using a pretrained network from the source task. One naive example is using a pretrained network, then finetuning it on support set and then inference on query set. [24] Prototypical Network Prototypical network [16] is a widely used metric-based approach for FSL. The model learns an embedding space in which each class is represented by a prototype, computed as the mean of the support set examples in that representation space [16]. Classification is performed by computing the distance between a query point and the class prototypes [16]. This approach has been highly effective in few-shot classification tasks due to its simplicity and efficiency [32, 33]. 
Figure 1 shows an example of Prototypical network handling support set and query set. The prototypes are formed by computing the mean of class A, B and C in representation space. Afterwards, the class probability of the query set is obtained by using Softmax with the negative value of the distance from its representation vector to centroid A, B and C respectively. Model-Agnostic Meta-Learning Model-Agnostic Meta-Learning (MAML) [1] takes an optimization-based approach by learning an initialization that enables rapid adaptation to new episodes. Instead of learning a fixed embedding, MAML trains a model across a distribution of episodes such that only a few gradient updates are needed for generalization. This approach allows model learning how to adapt itself to different episodes and learning to learn. [1] One illustration can be found in Figure 2, where the meta learner learns to adapt to episodes, resulting in weights $\theta _ { 1 }$ , $\theta _ { 2 }$ and $\theta _ { 3 }$ respectively. Figure 1: One example of prototypical network handling a support set with N-way $^ { \prime } { = } 3$ and N-sho $^ { \ast = 3 }$ . The query set is simplified to have one sample. Figure 2: One example that MAML adapts to different episodes [1] # 2.1.3 Time Series Classification Time series classification is the task of assigning a discrete label to an entire sequence of observations collected over time [34]. Let a time series of variables $D \geq 1$ and length $\intercal$ be represented as [34]: $$ X = ( x _ { 1 } , x _ { 2 } , \ldots , x _ { T } ) \in { \mathcal { R } } ^ { T \times D } , $$ where $\boldsymbol { x } _ { t }$ denotes the observation at time $t$ and $T$ is the length of the time series. If $D = 1$ , the time series is called univariate time series. Otherwise, it’s a multivariate time series. 
Given a dataset of labeled time series $\mathcal { D } = \{ ( X _ { i } , y _ { i } ) \} _ { i = 1 } ^ { N }$ , the goal is to learn a classifier $f$ such that for an unseen time series $X ^ { \prime }$ , the pred{i(cted la)b}el $f ( \boldsymbol { X } ^ { \prime } ; \boldsymbol { \theta } )$ approximates the true label $y _ { i }$ .[34] 1D CNN Recent research in time series classification has increasingly leveraged One Dimensional Convolution Neural Network (1D CNN) due to its ability to capture local temporal patterns and extract hierarchical features directly from raw sequential data. A typical 1D CNN for time series classification processes raw sequential data through a series of convolution layers with sliding filters that extract local temporal features, interleaved with non-linear activations and pooling layers to reduce dimensionality and achieve invariance. Finally, the network aggregates these features—often via global pooling—and feeds them into fully connected layers that produce the final class probabilities. [34] InceptionTime InceptionTime [2] is a state-of-the-art time series architecture. Similar to how AlexNet [35] transforms image classification by learning hierarchical representations, InceptionTime consists of Inception Modules and allows several parallel convolutional filters with different kernel sizes to capture temporal features at multiple scales, as shown in Figure 3. InceptionTime has performed excellently in some open public time series dataset. [2] Figure 3: Structure of Inception module [2] Moment In the time series domain, several transformer-based foundation models have emerged. One of the examples is Moment [3], a family of open-source foundation models. It is pretrained on 27 datasets across 13 domains, from healthcare, power, electricity to gait. To capture the rich and diverse patterns present in time series data, Moment adopts an architecture inspired by large language models. 
As shown in Figure 4, the overall design includes patching, transformer encoder and reconstruction head. Depending on the specific task, the model dynamically adapts its behavior and processing pipeline. Moment has performed great in some public time series benchmarks. [3] Figure 4: Overview of Moment [3] # 2.2 Related Work Wu and Chan [36] integrate Reptile [37] with compact convolutional layers and evaluate it on Electroencephalography (EEG) datasets—BCI-IV 2a [38] and PhysionetMI [39]. Their results demonstrate that Reptile can improve EEG few-shot motor classification. Wang et al. [40] leverage Matching network [41] with a 1D-CNN: an EEG segment and a reference waveform are projected into a shared embedding space, and their cosine similarity is computed. Sun et al. [42] applies Prototypical network [16] with a lightweight 1D-CNN to obtain latent representations. Once the entire support set is processed and prototypes are formed, a module called Prototype Enhancement, consisting of a multi-head self-attention layer, enables prototypes to “look at others,” thereby becoming more discriminative. When the query set is provided, the model predicts by measuring the distance between the query and the prototypes in the representation space. Papaioannou et al. [43] propose the label-combinational prototypical network, which extends the vanilla prototypical network to the multi-label regime and employs a lightweight VGGish [44] backbone. The label-combinational prototypical network shows strong performance on world-music-tagging benchmarks [45, 46, 47, 48, 49]. Tam et al. [50] employ a Siamese network structure [17] consisting of a single 2D CNN with three convolution blocks (kernel $3 \times 3$ , ReLU, BN, MaxPool). Prediction is based on the cosine similarity between the query set and the support set in latent space. The method shows good performance on a private hand-gesture EMG dataset. Rahimian et al. 
[51] develop a framework called FS-HGR, which treats the $N$ -way $K$ -shot task as a sequence-to-sequence meta-learning problem. Each episode sequence comprises the $K$ labeled support samples and several query samples; the network is expected to predict the labels of the query samples after being provided with the labeled support set, like SNAIL [52]. The network involves several Temporal Convolutional Network layers, several multi-head attention layers, and a fully connected layer. FS-HGR demonstrates moderate performance on Ninapro Database 2 [53]. Zhang et al. [54] try MAML with two different backbones, a 1D CNN and an LSTM. They train MAML [1] on high-frequency diseases, e.g. Amnesia, Parkinson’s, and Dementia, and evaluate it on Alzheimer’s, a rare disease. Hu et al. [55] propose Task-Sequencing Meta Learning (TSML), where episodes are ranked from easy to hard to enable the network to learn from the easier ones first. A k-means algorithm clusters the support set in each episode. Each episode is ranked based on the accuracy of samples in the support set when k-means converges. The model performs well on the private Farop dataset. Wang et al. [56] incorporate self supervised learning with a Siamese network. They select small proportions of the dataset, the CWRU bearing dataset [57] and the High-Speed-Train (HST) wheel-set bearing dataset [58], for self supervised learning. The self supervised task involves classifying six transformations (original, time-reverse, randomincrease, random-reduction, set-zero, add-noise). Once the self supervised task is complete, the pretrained weights are transferred to initialize the backbone of the Siamese network. The results showed a good performance. Table 1 summarizes the related work. It can be observed that most of the related works focus on biomedical domain, and there are few working in industrial times series. Furthermore, most of the related work is based on simple architectures or their variants. 
For FSL methodologies, metrics-based learning (Prototypical network, Matching network, Siamese network) are mostly preferred while few prefer optimization-based (MAML, Reptile) and model-based (SNAIL). Most of domain datasets are structured in multi-class except Papaioannou et al. [43] adapt FSL methodology into multi-label. Table 1: Related work summarization. Abbreviations used in this table: EEG (Electroencephalography), EMG (Electromyography), EHR (Electronic Health Record), TCN (Temporal Convolutional Network). # 3 Use Case Description and Data Understanding In this chapter, we describe the data source, including details about how the raw data was collected and the nature of the information it contains. Next, we proceed to data understanding, examining the dataset from a data science perspective. We explore the data thoroughly, aiming to identify key characteristics, uncover potential insights, and recognize inherent limitations or challenges. Finally, we discuss how this raw dataset can support our project’s specific research objectives. # 3.1 Data Source The M6 screw-fastening time series is collected by Songsik [4] and further reconstructed by Paulus [65] at Friedrich Alexander University Erlangen. The screw-fastening data was collected using a DEPRAG screwdriver (model 320EWT27-0035-F6 from the MINIMAT-EC series [66]) featuring an interchangeable internal hexagon screwdriver bit. Fastening was initiated by activating a lever and concluded upon lever release or reaching a predefined torque limit. The status of each fastening was indicated via an LED display: green represented ”in order (i.O.)”, and red indicated ”not in order (n.i.O.)” [4]. Torque data was precisely recorded through motor current measurements. The process was controlled by a DEPRAG AST11-1/-S system, operated via a web-based PC interface. The main tightening method was torque-controlled (”Verschrauben/L¨osen auf Drehmoment”), which is common in industrial applications. 
Before every tightening, the operator install either a fault-free setup or a specific fault (or fault combination), so the true class was known. That class ID (e.g. class24) was written into the CSV filename and later parsed as the label [4]. Figure 5 shows the collection overview. Figure 5: Overview of data collection [4] # 3.2 Data Understanding As discussed in the data source section, the dataset was collected using a high-precision screwdriver. During each screw-driving session, the screwdriver periodically recorded torque values and other relevant physical quantities, which results in a time series for each sample. In total, 2,300 samples were obtained, each of which captures the dynamics of the screwing process over time. Each sample is structured as a multivariate time series with six fields, as outlined in Table 2. The fields include time, rotational speed, torque, angle, program step, and current. Table 2: Raw data structure [4] The collected dataset spans 16 different categories, including seven uni-factorial defect types, seven multi-factorial defect types and one normal type, as shown in Table 3. The class labeled $" 0$ in order” represents the standard and defect-free samples. Table 3: Collected sample overview for each class [4] To better understand the torque behavior across these different classes, we visualized each class’s average torque value over time in Figure 7 and Figure 6. In this figure, the x-axis represents time in seconds, while the y-axis shows torque in Newton meters $( \mathsf { N } { \cdot } \mathsf { m } )$ . This visualization highlights the distinction between each class. Figure 6: Data overview averaged by each single-labeled samples. The y-axis shows the torque value in Newton meters, and the $x$ axis is the time in seconds. Figure 7: Data overview averaged by each multi-labeled samples. The y-axis shows the torque value in Newton meters, and the $x$ axis is the time in seconds. 
# 3.3 Use Case Description Our use case addresses real-world manufacturing scenarios involving very limited screwfastening data. We assume that, over time, entirely new error types will appear, the system has never seen before, and that only very few or none of these unforeseen errors will be obtainable. In a multi-class setting, the FSL models can only recognize new error types if those types appear in the support set. In a multi-label setting, however, the model can detect combinations of known defects even when those exact combinations aren’t in the support set. For example, if the support set contains only the uni-factorial defects $A$ and $B$ , a multi-label FSL model can still classify the multi-factorial defect ${ } ^ { " } A , B ^ { " }$ without ever having seen that specific combination in the support. In the multi-class setting, FSL applies when new error types are available and screw-fastening multi-labels are converted into distinct classes. If no new error types can be obtained and errors appear only as specific combinations of known defect types, multi-label FSL can be used. Furthermore, FSL libraries [67, 68] are label size agnostic, as these implementations can remap the label when sampling episodes. For instance, if we have one episode with ground truth labels $\{ 7 , 8 , 9 \}$ , the implementations will remap these labels to $\{ 0 , 1 , 2 \}$ by remapping 7 to 0, 8 to $1$ and 9 to 2. This key property ensures no matter how the screw-fastening error types are enriched in the future, the FSL models don’t need output dimension change and still fit. Note that these implementations are not working with multi-label dataset. Our adaptions are proposed in section 4.3. # 4 Data Preprocessing and Sampling This chapter details the methodologies employed to preprocess and structure the dataset used in this research. We first present the specific preprocessing steps to refine and format the raw data. 
Following preprocessing, we describe the approach for data separation and sampling. This includes the strategy for partitioning the dataset into training, validation, and test subsets. # 4.1 Data Preprocessing According to Wolfgang et al. [69], “torque is the most widely used and practically measurable process variable in industrial screw driving processes.” Moreover, the predecessors [4, 65] also extracted torque only when preprocessing M6 dataset. Based on these, we focus exclusively on the torque signal. We extract the torque column from each sample, reset any invalid values (i.e., torque values less than zero) to zero (the same as Songsik [4]), and apply min-max normalization using the formula: $$ x ^ { ' } = \frac { x - x _ { \operatorname* { m i n } } } { x _ { \operatorname* { m a x } } - x _ { \operatorname* { m i n } } } $$ # Where: $x$ is the original torque value, $x _ { \operatorname* { m i n } }$ is the minimum torque value in the sample, $x _ { \mathrm { m a x } }$ is the maximum torque value in the sample. Songsik [4] downsamples each series to 920 points, a target we likewise adopt. Both Songsik [4] and Paulus [65] use a downsampling rate of 20, which we also apply. After downsampling, series exceeding 920 points are truncated. Shorter series are zero-padded to 920 points. Figure 8 illustrates the preprocessing pipeline. # 4.2 Data Separation Following the machine learning common sense [23], the dataset used in this study was systematically divided into three distinct subsets:training, validation, and testing. We ensured that the label distributions in each subset are mutually exclusive in FSL. For instance, label (2,7) and label (1,2) are not mutually exclusive since there’s a label overlap on 2. We did the combinatorial assignments over the data labels, trying to place these labels into three slots (training, validation and test) and ensuring no label overlapping. However, no ideal solution exist. 
Because there’re compound labels (357), (267), (16) and (24), it implies any labels involving 1,2,3,4,5,6 and 7 must be placed in one set otherwise it would break the non-overlapping rule. For instance, if we place (24) in one set, then any label involving 2 and 4 should be placed in the set. Since 2 is included, any label involving 2 should be included, for example (267). The logic propagates. ⋯ Eventually, everything except 0 must be be one set. If we allow some labels to be abandoned, there exists a satisfiable solution, as shown in Table 4. Specifically, classes such as (2,7) and (2,6,7) are excluded. For sample count in training, validation and test set, we keep its ratio to 50:25:25. Normally, it’s suggested to be 75:15:15 by mainstream. If we force it to be 75:15:15, we could remove some labels in validation and test set. However, if we remove one label from Table 4, there would be such two cases in validation/test set: One multi-label and one single-label in validation/test set Two single-labels left in validation/test set For instance, for validation set, if we remove (24) and leave 2 and 4, we lose the valuable multilabel samples which are the key in our study. If we remove 2 and keep 4 and (24), in multi-label setting (assuming the network outputs by sigmoid), we would lose the opportunity classifying 4 since its ground truth on label 4 is always on. And the same applies when removing label 4. Moreover, removing more than one labels doesn’t introduce any meaningful setting since it leads feasible N-way to be 1. Therefore, our proposed solution (Table 4) is acceptable. Figure 8: Simplified flowchart of preprocessing screw-fastening raw data # 4.3 Episodic Sampling Our data is multi-labeled and existing FSL episodic sampling is based on multi-class. Therefore, we adapt the multi-label to multi-class. One simplest solution would be assigning every combinational label a unique class ID [70]. 
We consider the multi-label (represented as binary vectors) as bits, and its integer representation is its unique class ID. Formally, given a multi-label binary vector $V = \left( v _ { 1 } , \cdots , v _ { n } \right) \in \left\{ 0 , 1 \right\} ^ { N }$ , its unique class ID $c$ is: $$ c = \sum _ { i = 1 } ^ { N } 2 ^ { i - 1 } \cdot v _ { i } $$ The multi-class episodic sampling is always cooperated with multi-class few-shot classification. It might not work well with multi-label few-shot classification. The reason is that it can cause nondeterministic output dimensions, especially when the network relies on a sigmoid layer for multi-label classification. For instance, suppose we have three labels (0,1,2) in our dataset and use N-way $^ { \prime } { = } 2$ , unique labels in the first episode can be $\{ 0 , ( 1 , 2 ) \}$ , and unique labels in the second episode can be $\{ 0 , 1 \}$ . The output dimensions of first episode should be 3 since there are 3 different labels, and the output dimensions of the second episode is supposed to be 2. we could solve this using the full label representation, always outputting number of labels in the network. However, it loses the flexibility, when the screw-fastening label dimensions changes (e.g. adding a new error type), the model need retraining in this case. To make network (using sigmoid for final output) flexible for multi-label, we propose a labelbased episode sampling. It’s based on the idea that multi-label can be thought as several multi-classes simultaneously. For instance, a sample labeled 357 will collapsed into three copies labeled 3, labeled 5 and labeled 7, respectively. For episode sampling, we begin by collapsing all the multi-labels into multi-classes. Afterwards N-way single-labels are selected. Then support set and query set are selected in the same as multi-class episode sampling (chapter 2.1.2). 
Note that the ground truth labels are not simply remapped when generating support set and query set but adapted in some way. Adaption removes unsampled labels and remaps the labels eventually. For instance, if the sampled labels are $\{ 1 , 3 , 7 \}$ and three samples selected in support set has ground truth labels $\{ ( 1 ) , ( 1 , 3 , 9 ) , ( 1 , 2 ) \}$ , b}efore remapping, the three labels will be transformed to $\{ ( 1 ) , ( 1 , 3 ) , ( 1 ) \}$ . This multi-label episode sampling ensures the network output dimension is N-way. However, in this way, duplicate samples could appear in the support set sometimes. Table 4: Data separation overview # 5 Model Design This chapter outlines the methodologies employed in the design of the FSL models utilized in this project. First, the Prototypical network [16] approach is discussed. Subsequently, MAML [1] approach is presented. Finally, the backbone architectures are described in detail. Figure 9 shows the one episode (support set and query set) is inputted into multi-label/multiclass MAML/Prototypical network. Before episode is fed to Multi-class and Multi-class Prototypical network, all multi-labels in episodes are converted to integers (Formula 4.2). Before episode is fed to Multi-label Prototypical network, the activated label space will be calculated, which will be described afterwards. Both multi-label and multi-class MAML need adapting support set and updating gradient. Both multi-class and multi-label Prototypical network require centroid computation. MAMLs reply on logits when classifying query set and Prototypical networks depend on logits on negative distance between query and each centroid. The networks can use logits to update or backpropagate. Figure 9: Few-Shot learning methodology pipeline overview. Every node is a procedure except the Support Set & Query set. 
# 5.1 Prototypical Network Approach # 5.1.1 Prototypical Network Approach for Multi-Class Setting We encode our multi-label data into multi-class using Formula 4.2. In this case, every label $y _ { j }$ will be an integer. Multi-Class Centroid Computation For multi-class, we use the same way to compute centroids from Snell et al. [16]. In each training episode $\tau _ { i }$ , data is provided as a tuple consisting of a support set $S _ { \tau _ { i } }$ and a query set $Q _ { \tau _ { i } }$ . Initially, the samples within the support set are fed into the network backbone $f$ to obtain their respective embeddings. Utilizing the provided labels, we compute the prototypes $C _ { l }$ by averaging the embeddings corresponding to each class $l$ as [16]: $$ C _ { l } = \frac { 1 } { N _ { l } } \sum _ { ( x , y ) \in S _ { \tau _ { i } } } f ( x ; \theta ) \mathrm { i f } y = l $$ where $N _ { l }$ is number of samples whose label $\mathbf { \epsilon } = l$ in $S _ { \tau _ { i } }$ . Query Set Prediction & Loss Function Subsequently, the query set passes through the backbone to generate embeddings. We then measure the distance between each query embedding and each class prototype. The negative distance to each centroid will be fed into Softmax to make logits. Finally, the cross-entropy loss between logits and true labels is computed as [16]: $$ \mathcal { L } ( \boldsymbol { \theta } ) = \frac { 1 } { | Q _ { \tau _ { i } } | } \sum _ { ( \boldsymbol { x } , \boldsymbol { y } ) \in Q _ { \tau _ { i } } } L _ { c e } ( l o g i t s , \boldsymbol { y } ) $$ where $$ l o g i t = S o f t m a x ( - d ( f ( x ; \theta ) , C ) ) $$ $d$ is the distance function (e.g. Euclidean distance) and $C$ is the collection where each entry is a class centroid. # 5.1.2 Prototypical Network Approach for Multi-Label Setting For scenarios involving multiple labels, we change the way of computing centroids and adapt the LC-Protonets [43] framework. 
Activated Label Space For a sample labeled as (A,B), it exhibits the characteristics of class A, class B and class (AB). Therefore, we say label (A,B) has the label space $\{ A , B , ( A , B ) \}$ . The label space of a multi-label is the power set [71] of each individual compo{nent, e(xcludi)n}g the empty set [43]. For each episode $\tau _ { i }$ with a support set $S _ { \tau _ { i } }$ and a query set $Q _ { \tau _ { i } }$ , we first compute the activated label space $A _ { i }$ , excluding the empty set: $$ A _ { i } = \bigcup _ { ( x , y ) \in S _ { \tau _ { i } } } \mathcal { P } ( s u p p ( y ) ) \backslash \emptyset $$ where $$ s u p p ( [ l _ { 1 } , l _ { 2 } , \ldots , l _ { m } ] ) = \{ i \mid l _ { i } \neq 0 \} $$ supp denotes activated labels, $\mathcal { P }$ denotes power set [71], and $y$ is a multi-label binary vector. Multi-Label Centroid Computation We apply the same methodology from LC-protonet [43] to compute centriods. For a label $l \in A _ { i }$ , to compute its centroid $C _ { l }$ , samples contribute to the centroid as long as their labels include $l$ [43]. For instance, one sample labeled as (A, B) contributes to class A’s centroid. For a label $l \in A _ { i }$ , its centroid is computed as follows: $$ C _ { l } = \frac { 1 } { N _ { l } } \sum _ { ( x , y ) \in S _ { \tau _ { i } } } f ( x ; \theta ) \mathrm { ~ i f ~ } l \in \mathcal { P } ( s u p p ( y ) ) $$ where $N _ { l }$ is the number of samples whose label is covered by $l$ . After computing the centroids, we process the query set in a manner similar to the standard Prototypical network used for multi-class classification. We compare the distance between the query samples and centroids and make predictions based on this. However, since we are dealing with multi-label classification, we decode the prediction into multi-label with the help of label space $A _ { i }$ . The complete pipeline is illustrated in Figure 10. 
Figure 10: The Prototypical Network pipeline for multi-label screw-fastening data, where there are centroids for the activated labels in the representation space. Here, we apply softmax on negative distance from query to each prototype. For simplicity, here we use real label instead of one-hot encoding representation. # 5.2 Model Agnostic Meta-Learning Approach We begin with a single model called the meta-model, denoted by $\Theta$ . For each training episode $\tau _ { i }$ , consisting of a support set $S _ { \tau _ { i } }$ and a query set $Q _ { \tau _ { i } }$ , we create a cloned instance of the meta-model, which we refer to as the learning model, denoted by $\theta$ . Support Set Adaption & Learning Model Update During each episode, the support set is fed through the learning model, which includes a backbone and additional dense layers. The dense layer output then passes through an activation function—typically softmax for multi-class classification or sigmoid for multi-label settings. A loss is computed using the cross-entropy loss function. This loss is used to update the learning model parameters $\theta$ via a gradient descent step [1]: $$ \theta _ { i } = \Theta - \sum _ { ( x , y ) \in S _ { \tau _ { i } } } \alpha \nabla _ { \Theta } L ( f ( x ; \Theta ) , y ) $$ where $L$ is the cross entropy loss and $\alpha$ is the learning rate. Query Set Adaption $\alpha$ Meta Model Update Next, the query set $Q _ { \tau _ { i } }$ is fed into the updated learning model $\theta _ { i }$ , and we check the output $f _ { \theta _ { i } }$ and compute the loss. This loss is then backpropagated through the learning model to update the meta-model $\Theta$ . Since the learning model is derived from the meta-model, this process introduces second-order gradients (i.e., Hessian gradient) w.r.t $\Theta$ [1]. 
In episode $\tau _ { i }$ , the meta-model is updated using [1]: $$ \Theta = \Theta - \beta \nabla _ { \Theta } \sum _ { ( x , y ) \in Q _ { \tau _ { i } } } L ( f ( x ; \theta _ { i } ) , y ) $$ where $\beta$ is the learning rate for meta model. Figure 11 provides an overview diagram containing steps in MAML during one training episode. Furthermore, during learning model adaptation, we allow it to adapt multiple times for one episode. Algorithm 1 shows the pseudo-code of how we train MAML within the episode. During inference, the model still follows the similar pipeline. The learning model still be copied from the meta model. It will still adapts the support set. However, when it finishes adaption, it will classify the query set. # 5.3 Backbone The following section introduces the backbones we used in this project and discusses what we adapted for them. The encoders we used are 1D-CNN, InceptionTime, and Moment. # 5.3.1 1D CNN Our 1D-CNN encoder adopts a standard architecture [34] consisting of N convolutional blocks followed by M dense blocks. Each convolutional block includes a convolutional layer, a pooling layer, batch normalization, and dropout. The output from the final convolutional block is Figure 11: MAML training pipeline overview. In one episode, the learning model is first initialized from meta model. Then support set is fed to the learning model. The loss generated from this is backpropagated to learning model itself. Afterwards the query set is fed to learning model, the loss generated is backpropgated to meta model. 1.X indicates the steps during support set adaption. 2.X denotes the steps handling query set. # Algorithm 1 MAML training procedure (adapted from [1]) Require: adaption step M, initial parameter $\Theta$ Require: Training Episodes 1...N. 
Require: support set $S _ { \tau _ { i } }$ and query set $Q _ { \tau _ { i } }$ for each episode $\tau _ { i }$ Require: $\alpha$ and $\beta$ : learning rate and meta-learning rate 1: for $i = 1$ to $N$ do ▷ For each training episode 2: $\theta _ { i } = \Theta$ 3: for $z = 1$ to $M$ do ▷ Adapt support set for $M$ times 4: $\begin{array} { l } { \mathcal { L } = \sum _ { \left( x , y \right) \in S _ { \tau _ { i } } } L ( f ( x ; \theta _ { i } ) , y ) } \\ { \theta _ { i } = \theta _ { i } - \alpha \nabla _ { \theta _ { i } } \mathcal { L } } \end{array}$ ▷ Loss on support set 5: ▷ Learning model updates 6: end for 7: $\begin{array} { l } { \mathcal { L } = \sum _ { ( x , y ) \in Q _ { \tau _ { i } } } L ( f ( x ; \theta _ { i } ) , y ) } \\ { \Theta = \Theta - \beta \nabla _ { \Theta } \mathcal { L } } \end{array}$ ▷ Performance on query set and gather loss 8: ▷ update meta model 9: end for flattened and passed through the dense blocks composed of fully connected layers. The output of the last dense layer is then fed into a final linear layer that projects it to the desired representation dimension. Figure 12 illustrates the overall architecture. # 5.3.2 InceptionTime We do not introduce significant modifications to the original InceptionTime architecture [2]. In our implementation, the network comprises $d$ sequences Inception modules. For each Inception module, we set the three kernel sizes to 39, 19, and 9, respectively. This differs from the original configuration proposed in Fawaz et al. [2], which used even-sized kernels such as 40. Our choice to use odd kernel sizes was made to preserve symmetry in convolution operations [72]. The detailed structure of an Inception module is illustrated in Figure 13. Following the Inception modules, the output is passed through dense layers, which follows the same configuration depicted in Figure 12. Figure 12: 1D CNN architecture used in this project Figure 13: Inception module used in this project. 
# 5.3.3 Moment Moment has several variants: Moment Small, Moment Base, and Moment Large. We use Moment Large, given that it has the highest performance on open public benchmarks [3]. Moment Large contains approximately 341M parameters and consists of 24 T5 encoder blocks [3]. A T5 encoder block [5] is a variation of the transformer encoder [73]. In Moment Large, one T5 encoder block uses a model dimension of 1024 and feedforward layers of dimension 4096, with 16 attention heads [74]. This means that the query, key, and value projection layers are all $\mathrm { 1 0 2 4 \times 1 0 2 4 }$ linear layers. The architecture can be found in Figure 14. Figure 14: Moment (large) architecture for representation learning [3]. The main architecture consists of patchification, a patch embedder, and a sequence of T5 encoder blocks. It takes a time series of length 512 and outputs a 1024-dimensional representation vector. One T5 encoder [5] is a variant of the transformer encoder. The embedder is simply one linear layer that maps from the patch dimension to 1024. Since Moment is pretrained, we can freeze it entirely for faster computation and balanced performance. Another option is finetuning Moment. Full finetuning is not considered because of the model's large parameter count and the scarcity of data in FSL. Finetuning only a final dense layer might be considered; however, the Moment encoder is not designed with a pretrained final dense layer. This might suggest finetuning the last T5 encoder block instead; nevertheless, a single T5 block still contains over 3M parameters, which is still large, so finetuning the last T5 block is not planned. Partial finetuning inside the last T5 block could be proposed, but we did not find much prior work on partially finetuning a single transformer encoder. Instead, we finetune only via Low-Rank Adaptation of Large Language Models (Lora) [75]. Lora approximates dense-weight updates using a low-rank decomposition.
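As an illustration of this decomposition, the sketch below shows a linear layer with a frozen weight matrix plus a trainable low-rank update, in the generic Lora style [75]. This is not Moment's actual implementation; the class and its parameters are our own illustration.

```python
import numpy as np

class LoRALinear:
    """Frozen dense weight W plus a trainable low-rank update B @ A,
    scaled by alpha / rank (generic LoRA-style layer, illustrative only)."""
    def __init__(self, d_in, d_out, rank=4, alpha=8, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.normal(size=(d_out, d_in)) * 0.02  # frozen pretrained weight
        self.A = rng.normal(size=(rank, d_in)) * 0.01   # trainable low-rank factor
        self.B = np.zeros((d_out, rank))                # trainable, zero-initialized
        self.scale = alpha / rank

    def __call__(self, x):
        # Effective weight is W + scale * B @ A; only A and B are trained,
        # so a 1024x1024 layer at rank 8 needs 2*8*1024 = 16,384 trainable
        # parameters instead of 1,048,576.
        return x @ (self.W + self.scale * self.B @ self.A).T

layer = LoRALinear(1024, 1024, rank=8)
x = np.ones((2, 1024))
out = layer(x)
```

Because B starts at zero, the layer initially behaves exactly like the frozen pretrained layer, which is the standard LoRA initialization choice.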
By multiplying two low-rank matrices, a full-size update matrix is obtained, which is added to the frozen weights of a linear layer [75]. Since Moment only accepts time series of length 512 [3], we add a downsampling module for our preprocessed data of length 920: it downsamples at a rate of 2 and then pads to 512. Another way to feed our preprocessed data would be truncation, removing any information after time point 512. However, this would discard information, as relevant signal may lie in the latter part of the series. One promising alternative is chunking: the 920-step series is split into two 512-step chunks, the first containing the first 512 time steps and the second containing the remaining steps padded to length 512. Feeding these chunks into Moment in parallel yields two representation vectors, each encoding self-attention information within its own chunk. Cross-chunk attention is lost, so obtaining a full representation of the whole series would require fusing the two vectors with an additional network. Since it is unclear which fusion architecture to use, and fusion would add computation time, we discarded this option following Occam's razor. The Moment used in this report has four variants: Frozen Moment: exactly as in Figure 14, with all pretrained weights frozen, including the embedder and all T5 encoder blocks. Frozen Moment $^ +$ linear: the frozen Moment appended with several tunable linear layers. Lora Moment: Moment finetuned with Lora on all transformer modules in all T5 encoder blocks. Lora Moment $^ +$ linear: the Lora Moment appended with several dense layers.
The dense layers follow the same architecture as those in Figure 12. # 5.4 Specification As mentioned before, FSL methodologies provide the framework specifying how the FSL pipeline adapts to the support set and classifies the query set. All the FSL methodologies in our project require a backbone to represent the input time series. For different combinations of FSL methodology and backbone, some specifics apply, which we discuss in this section. Prototypical Network The Prototypical network is a metric-based FSL methodology and therefore depends on the backbone; the pipeline itself stays the same regardless of which backbone is used. In this report, we equip the Prototypical network with the 1D CNN, InceptionTime, and all Moment variants. When used with the frozen Moment, gradients stop at the last layer of the frozen Moment during backpropagation. MAML Unlike the Prototypical network, MAML relies on tunable parameters: the backbone must have some trainable sections, so the fully frozen Moment cannot be used with MAML. Moreover, our MAML uses second-order gradients (the Hessian), which requires a small number of parameters for acceptable computation speed. Even if we used Lora Moment and finetuned only $1 \%$ of the total parameters, around 4M parameters would remain, and the memory complexity of the Hessian is $\mathcal { O } ( n ^ { 2 } )$ . We therefore do not use Lora Moment or Lora Moment $^ +$ linear with MAML; the only remaining Moment option as backbone is frozen Moment $^ +$ linear. The lightweight backbones, 1D CNN and InceptionTime, can be used with MAML without restriction. Backbones During training, some hyperparameters (e.g. the number of epochs) are determined by the backbone. For the lightweight backbones, 1D CNN and InceptionTime, we set a large value of 200 epochs. For Moment, we took inspiration from a Moment tutorial [76], which uses 1 epoch for a forecasting task on ETDataset [77]. However, ETDataset has 30,000 samples, each with 7 features.
The tutorial does not state the training-set proportion; assuming a $70 \%$ training split, one epoch iterates over 21,000 samples. Our training set contains 1,100 samples, so matching that number of iterations requires roughly 20 epochs, which is the value we set in the experiments. # 6 Experimental Results and Discussion We conduct experiments for two FSL methodologies, the Prototypical network and MAML, with three types of backbones: 1D CNN, InceptionTime, and Moment. Moreover, we have two ways to sample few-shot learning data: by class or by label. We conduct experiments combining the different methodologies, backbones, and data-sampling configurations. We follow the regular train-validation-test machine learning pipeline [78]. However, instead of selecting the best checkpoint, we are more interested in the robustness of a model; we therefore search for the best hyperparameters and evaluate them, as shown in Figure 15. Figure 15: Experiment pipeline. We first sample a hyperparameter configuration and use it to train a model, recording the model's performance on the validation set. The HP configuration yielding the best validation performance is then selected. In the test evaluation, the best HP configuration initializes the final model, which is trained on the training set and evaluated on the test set; its test performance is recorded. This test evaluation is repeated several times. The hyperparameter sampling method is the Bayesian sampling method from Optuna [6]. Abbreviations in this figure: HP (Hyperparameter). # 6.1 Hyperparameter Setting This section discusses the hyperparameters of each model and methodology, detailing their meaning and ranges. # 6.1.1 Basic Hyperparameter Every FSL methodology has hyperparameters for the learning rate, weight decay, optimizer, and early-stopping patience; details can be found in Table 5.
Table 5: Basic hyperparameters. If a hyperparameter is fixed, it is not considered in hyperparameter optimization. # 6.1.2 MAML Hyperparameter Each methodology has its own specific hyperparameters. MAML has two: the meta-learning rate and the number of adaptation steps. The meta-learning rate specifies the step size of the meta-model's gradient update; the adaptation steps control how many iterations the learning model spends on the support set. Table 6 shows the hyperparameter ranges. # 6.1.3 Prototypical network Hyperparameter The Prototypical network has two hyperparameters: normalization and the distance function. Normalization is a boolean choice of whether to apply L2 normalization; the distance function computes the distance between two representation vectors. Table 6 shows the hyperparameter ranges. Table 6: Hyperparameters and their ranges for different FSL methodologies. # 6.1.4 Backbone-Specific Hyperparameter For the 1D CNN, the critical hyperparameters are the number of convolutional layers, number of channels, kernel size, pooling function, pooling kernel size, and convolution activation function. For InceptionTime, the number of Inception modules, the number of filters, and the output dimension are critical hyperparameters. For Moment, if it is not frozen, we have the Lora rank, Lora alpha, and target module as hyperparameters. Since Moment is a pretrained foundation model, we use a small number of epochs, 20; for the other, lightweight backbones, we fix the epochs to 200. Table 7 details each hyperparameter. The full hyperparameters for each backbone can be found in Appendix A. Table 7: Backbone-specific critical hyperparameters. The number of epochs is fixed for all backbones and not taken into account in optimization. # 6.2 Experiment Setting Our experiments are conducted on a machine running Debian GNU/Linux 12.0. The system is equipped with an NVIDIA GeForce RTX 2080 GPU featuring 8 GB of VRAM, which is sufficient given the data scarcity of FSL.
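The two Prototypical-network hyperparameters from Section 6.1.3 (normalization and the distance function) act at classification time. A minimal sketch of prototype-based classification, assuming Euclidean distance as the distance function (function and variable names are our own):

```python
import numpy as np

def prototypical_predict(support_emb, support_lbl, query_emb, normalize=True):
    """Prototypical-network classification sketch: prototypes are the mean
    support embedding per class; each query takes the label of the nearest
    prototype (Euclidean distance, optional L2 normalization)."""
    if normalize:
        support_emb = support_emb / np.linalg.norm(support_emb, axis=1, keepdims=True)
        query_emb = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    classes = np.unique(support_lbl)
    # One prototype per class: mean of that class's support embeddings.
    protos = np.stack([support_emb[support_lbl == c].mean(axis=0) for c in classes])
    # Pairwise query-prototype distances, shape (n_query, n_class).
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return classes[np.argmin(d, axis=1)]

support_emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
support_lbl = np.array([0, 0, 1, 1])
query_emb = np.array([[1.0, 0.05], [0.05, 1.0]])
pred = prototypical_predict(support_emb, support_lbl, query_emb)
```

Because only the prototypes depend on the support set, no backbone gradients are needed at adaptation time, which is why the frozen Moment can be used here but not with MAML.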
All code is written in Python 3.10 and executed with CUDA 11.8 to enable GPU acceleration. To realize episodic sampling, we make use of the Learn2Learn library [67]. To simulate scarcity of data, we set K-shot $= 10$ . If we use class-based sampling (Chapter 4.3) with labels encoded into multi-class, we set N-way $= 3$ . If we use label-based sampling (Chapter 4.3), since the validation and test sets contain only two distinct labels, the maximum N-way we can set is 2. Since every category has at least 100 samples, at least 90 samples per category remain after deducting the K-shot for the support set. We could set M-query $= 90$ , but as a safeguard against potential bugs in the code, we set M-query $= 50$ , a more conservative number. We set the number of episodes to 60 for training, 30 for validation (for faster computation), and 100 for testing (for accurate results). # 6.3 Experiment Results We use Optuna [6] to find the best hyperparameters: models are trained on the training set and selected on the validation set. Optuna optimizes multiple objectives (precision, recall, and F1 score); the best hyperparameters are selected by F1 score. With the best hyperparameters, we conduct training multiple times and record the performance on the test set. For the lightweight models, i.e., 1D CNN and InceptionTime, we repeat this up to 50 times; for Moment, we trained only 15 times due to its large number of parameters and long training time. Table 8 reports the evaluation results using the weighted F1 score, which accounts for class imbalance. Moreover, for each model, we identify its best methodology, as shown in Table 9. Furthermore, we calculated per-class metrics for the best (model, methodology) combination on the test set, as shown in Table 10.
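The episodic sampling setting above (N-way, K-shot, M-query) can be sketched as follows. This is a simplified stand-in for the Learn2Learn sampler we actually use; `data_by_class` and the function name are our own illustration.

```python
import random

def sample_episode(data_by_class, n_way=3, k_shot=10, m_query=50, rng=None):
    """Sample one episode: pick n_way classes, then k_shot support and
    m_query query samples per class (matching our 3-way, 10-shot,
    50-query setting). data_by_class maps label -> list of samples."""
    rng = rng or random.Random(0)
    classes = rng.sample(sorted(data_by_class), n_way)
    support, query = [], []
    for c in classes:
        picks = rng.sample(data_by_class[c], k_shot + m_query)
        support += [(x, c) for x in picks[:k_shot]]
        query += [(x, c) for x in picks[k_shot:]]
    return support, query

# Toy data: 5 classes with 100 samples each (>= 100, as in our dataset).
data_by_class = {c: list(range(100)) for c in range(5)}
support, query = sample_episode(data_by_class)
```

Sampling without replacement within each class guarantees the support and query sets are disjoint, which the evaluation protocol requires.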
In terms of time, a 100-trial hyperparameter search plus the evaluation experiments on the lightweight models completes within one day. However, Moment takes longer. The hyperparameter search on Lora Moment (without/with linear) for 100 trials takes around four days, and its evaluation can be finished within two days. For the frozen Moment (without/with linear), the hyperparameter search can be finished within two days and the evaluation within one day. # 6.4 Summary and Discussion of Experimental Results Based on the evaluation results in Table 8 and the class-wise performance comparison in Table 10, we observe that InceptionTime consistently achieves the highest weighted F1 scores in both multi-class and multi-label settings under the Prototypical network methodology. Specifically, it achieves an F1 score of 0.944 in multi-class and 0.935 in multi-label classification. While Moment was expected to perform best, in most cases it even fails to outperform the 1D CNN. The extra downsampling for Moment (mentioned in Chapter 5.3.3) may be one reason. Another reason may be insufficient data: in a reference use case [76], Moment is finetuned for one epoch on ETDataset [77], which consists of around 30,000 samples with 7 features, whereas our training set has only 1,100 samples, for which we increased the epochs to 20. This might lead to overfitting. Furthermore, the frozen Moment encoder (direct evaluation) shows the lowest performance; however, applying Lora adaptation and adding a linear classifier significantly improves Moment's performance. Across all backbones, the Prototypical network generally outperforms MAML. For example, InceptionTime achieves 0.944 under the Prototypical network compared to 0.912 under MAML. This matches the impression that the Prototypical network generally outperforms MAML; the open benchmark [79] shows the same tendency. From Table 10, InceptionTime and the 1D CNN provide a balanced trade-off between precision and recall.
While the fine-tuned Moment model achieves the highest precision for individual classes, its relatively lower recall results in a slightly reduced F1 score. InceptionTime offers the best overall balance, maintaining high scores across all metrics and class groupings. InceptionTime with the Prototypical network emerges as the most effective and robust combination for FSL, outperforming even the extensively pretrained backbones. Table 8: Model F1 scores across FSL methodologies. The best hyperparameters are used for evaluation on the test set. The lightweight models are evaluated 50 times; Moment is evaluated 15 times. The evaluation uses the sklearn [7] weighted F1 score. The model with the highest mean in each methodology is marked in bold. The F1 column is in the format mean ± standard deviation. Table 9: The best FSL methodology for each backbone, selected by the mean weighted F1 score on the test set; Moment denotes Lora Moment $^ +$ linear. Table 10: Performance comparison across different classifiers and classes in the test set, measured in weighted precision, weighted recall and weighted F1. The F1 column is in the format mean ± standard deviation.
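For reference, the weighted F1 score used in Tables 8-10 (the metric sklearn computes with `average='weighted'` [7]) can be sketched in plain NumPy: per-class F1, averaged with weights proportional to each class's support.

```python
import numpy as np

def weighted_f1(y_true, y_pred):
    """Weighted F1: per-class F1 scores averaged with weights equal to the
    fraction of true samples in each class (handles class imbalance)."""
    classes = np.unique(y_true)
    total = len(y_true)
    score = 0.0
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += (tp + fn) / total * f1  # weight = class support / total
    return score
```

Unlike macro F1, the weighted variant lets frequent classes dominate the average, which is why it is the appropriate summary metric for our imbalanced defect classes.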
Few-shot learning (FSL) has shown promise in vision but remains largely unexplored for \emph{industrial} time-series data, where annotating every new defect is prohibitively expensive. We present a systematic FSL study on screw-fastening process monitoring, using a 2\,300-sample multivariate torque dataset that covers 16 uni- and multi-factorial defect types. Beyond benchmarking, we introduce a \textbf{label-aware episodic sampler} that collapses multi-label sequences into multiple single-label tasks, keeping the output dimensionality fixed while preserving combinatorial label information. Two FSL paradigms are investigated: the metric-based \emph{Prototypical Network} and the gradient-based \emph{Model-Agnostic Meta-Learning} (MAML), each paired with three backbones: 1D CNN, InceptionTime and the 341 M-parameter transformer \emph{Moment}. On 10-shot, 3-way evaluation, the InceptionTime + Prototypical Network combination achieves a \textbf{0.944 weighted F1} in the multi-class regime and \textbf{0.935} in the multi-label regime, outperforming finetuned Moment by up to 5.3\% while requiring two orders of magnitude fewer parameters and training time. Across all backbones, metric learning consistently surpasses MAML, and our label-aware sampling yields an additional 1.7\% F1 over traditional class-based sampling. These findings challenge the assumption that large foundation models are always superior: when data are scarce, lightweight CNN architectures augmented with simple metric learning not only converge faster but also generalize better. We release code, data splits and pre-trained weights to foster reproducible research and to catalyze the adoption of FSL in high-value manufacturing inspection.
# 1. Introduction Privacy-relevant laws that have come into effect in the last six years include predominantly the EU General Data Protection Regulation (GDPR) [1] and the California Consumer Privacy Act of 2018 (CCPA) [2]. The California Privacy Rights Act (CPRA) [3] is a recent amendment to the CCPA, while other countries are following the same rationale and introducing similar privacy laws, such as the UK Data Protection Act (UK DPA) [4], which is the UK's implementation of the GDPR, the Brazilian General Personal Data Protection Law [5] and New Zealand's Privacy Act [6]. A vast number of software repositories are available on GitHub and are reused by developers worldwide. Since software may also contain mechanisms for collecting users' data, the community is obliged to adhere to the relevant legislation in order to protect users' privacy [7]. Overall, a positive impact of the privacy laws on practitioners' behaviours and organizations' cultures has been reported [8]. Previous works have examined developers' concerns on privacy in specific domains, e.g. analyzing Reddit discussions on mobile application development [9, 10], Q&A sites on privacy [11] or pull requests [12], but have not investigated how repositories evolve in terms of commits. However, how privacy laws have led software repositories to update their source code is a useful area of research: it allows us to understand whether repositories have performed commits for data privacy laws and how long it took to make the appropriate changes, so that tendencies can be found and areas of improvement identified. To guide our research in an attempt to answer the above basic question, we structure our work around the following research questions (RQs): • RQ1. Which mentions of the main data privacy laws do commits make, and how has this evolved over time?
We have analyzed the number of commits and the changed Lines of Code (LOC) in commits that make reference to main data privacy laws, observing the year the change was made, as well as how long it took to complete these relevant commits. GDPR, CCPA, CPRA and UK DPA are the laws considered in this and the next RQs. • RQ2. Which type of repositories and which programming languages are more active in relevant development activities on data privacy legislation? In this RQ, our intent was to find how many commits are performed in repositories and how many LOC are usually changed. The effect of the owner type (User or Organization) and of the main programming language of the repository on the commits volume is also examined. • RQ3. What are the main terms appearing together with data privacy laws in commits on GitHub? As there are relevant privacy terms that may be indicated in the commits, we examined which are the most frequent terms appearing in the commit message per examined law and the presence of specific user rights from the legislation. • RQ4. Which is the main purpose of a commit that addresses privacy legislation? The commit may refer to code change or to text change (e.g. updates to privacy policy text). In order to complement the automated analysis that was used in the previous three RQs, the aim of the RQ is to examine some commit messages in more detail. The RQ was answered by manually analyzing the commit messages and relevant code changes from a small subset of the dataset (594 commits from the 70 most popular repositories in the dataset). We collected commits using keywords from recent privacy laws: GDPR, CCPA, CPRA, UK DPA. Via a mainly automated and partially manual process, we analyzed 36,807 commits from 12,391 repositories from 2016 till 2023. 
To the best of our knowledge, no study on commit activities for privacy legislation on GitHub commits exists, and no related study involving commits has utilized such a large number of commits coming from a large number of repositories. The contribution of our work is summarized in the following: i) creation of a huge labeled corpus of commits related to data privacy laws, ii) analysis of references to data privacy laws in commits on GitHub using commit volume and changed LOC, iii) investigation of the characteristics of the relevant commits and repositories (e.g. main terms). An initial version of this line of work has been published with a preliminary analysis on the indication of GDPR in GitHub commits [13]. The remainder of the text is structured as follows. Section 2 presents relevant works in the area. The methodological process is introduced in section 3. Results are presented in section 4 and are further discussed together with implications in section 5. Threats to validity are examined in section 6, and finally, section 7 concludes the work. # 2. Background and related work # 2.1. Privacy laws The landscape of privacy legislation has changed primarily after the introduction of GDPR, which came into effect on May 25th, 2018 and superseded the Data Protection Directive 95/46/EC [1]. The UK DPA is the UK version of GDPR that complements GDPR to suit the needs of the UK (with the same effect date as GDPR). CCPA followed in the United States, coming into effect on January 1st, 2020, whereas CPRA adds to the existing provisions of CCPA (effect date: January 1st, 2023) [3]. Many other countries have followed the GDPR paradigm, recently introducing data privacy laws. According to the United Nations Conference on Trade and Development $^ 1$ , 71% of countries worldwide have data protection and privacy legislation in place, with a total of 241 laws including older and newer laws within one country.
Software needs to comply with the above laws, provided that it operates in the geographical areas where the law applies. For instance, the INFORM e-learning platform was implemented considering GDPR requirements [14]. One main aspect of data collection is that users need to provide their consent to the collection, usage and processing of their personal data, and they are usually informed about these aspects via the privacy policy of the service or application. User rights are fundamental, as they provide users control over data activities. GDPR indicates 8 user rights (also present in the UK's Data Protection Act): the right to information, the right of access, the right to rectification, the right to erasure, the right to restriction of processing, the right to data portability, the right to object, and the right to avoid automated decision-making. CCPA has introduced five main privacy rights for consumers: the right to know, the right to delete, the right to opt-out (of sale), the right to disclosure, and the right to non-discrimination, and CPRA four additional rights: the right to correct (inaccurate personal information), the right to opt-out of automated decision making, the right to data portability and the right to limit use and disclosure of sensitive personal information. UK DPA shares the same rights as GDPR. # 2.2. Commit/issue analysis How Free and Open Source Software is used by large organizations was examined using 1,314 repositories from GitHub [15]. Specific metrics were used for this, including frequency of commits, Lines of Code and comments in source code. The relation of commit sentiment to software bugs was investigated, with a main conclusion that commits related to bugs (introducing, preceding or fixing bugs) are more negative than other types of commits [16]. Issue comments on GitHub were analyzed by Khalajzadeh et al. with the aim of identifying human-centric issues, and a wide range was encountered, including Privacy & Security [17].
Other works on commit analysis include the Anomalicious tool that assists in detecting potentially malicious commits [18], works that detect unusual commits [19], works that perform commit classification [20, 21, 22], and works that examine other properties (e.g. size of commits, software design degradation) [23, 24, 25]. Labeling issues as questions, bugs or enhancements has been examined using BERT (Bidirectional Encoder Representations from Transformers) [26]. Earlier works also studied issues, examining the overall adoption of issue trackers, the relevant categories, how they are used by the project's community and how they relate to the project's success [27]. # 2.3. Data privacy in Web applications and Open Source Software Topic modelling was applied to 1,733 privacy-related questions on Stack Overflow, and a random sample of 315 questions was then qualitatively analyzed by Tahaei et al. [11]. Questions collected included the word 'privacy' either in their title or as a tag. Laws and regulations, such as GDPR, were included among the themes in a thematic analysis performed in the qualitative examination to identify the drivers that made the user post a specific question. Regulations outside the EU were not found in that sample, but in the whole dataset references to other regulations, such as the USA's Health Insurance Portability and Accountability Act (HIPAA), were found. A similar work that employed topic modelling, also considering the Information Security and Software Engineering Stack Exchange sites (apart from Stack Overflow), was performed by Diepenbrock et al. [28]. Among the topics, the Legal topic covers discussions on compliance with laws, such as GDPR and CCPA. Developers' discussions on Reddit were examined using a qualitative analysis of a sample of 207 threads mentioning different forms of personal data from the r/androiddev forum on Reddit [9].
The authors relied on the legislation, including GDPR and CCPA, to extract relevant terms for personal data. Another work extended the previous analysis of Reddit discussions and used word frequency, topic clustering and classification to analyse 437,317 threads from r/webdev, r/androiddev, and r/iOSProgramming [10]. Concerning GDPR and CCPA, it was found that there is a significant change in topics and terms due to GDPR, but to a lesser extent due to CCPA. In order to see whether websites comply with the minimum requirement of CCPA of providing a hyperlink on their homepage with the text "Do Not Sell My Personal Information" (DNSMPI) (the right to opt out of sale), a corpus of web documents was examined [29]. Developers of mobile applications directed to children were asked about the privacy compliance processes they follow, including the developers' perspectives on the requirements of the Children's Online Privacy Protection Act (COPPA), GDPR and CCPA [30]. It was found that developers put a lot of trust in the enforcement performed in the application markets, and as a result there is a need for more usable compliance-checking and auditing tools. Various online approaches investigate privacy policies, automating their analysis [31] or examining compliance with legislation, such as GDPR [32, 33]. Previous works have also identified the need to investigate how software developers address privacy regulations [34]. Privacy-relevant requirements for software projects were derived in a taxonomy from GDPR, the ISO/IEC 29100 privacy framework, the Thailand Personal Data Protection Act and the Asia-Pacific Economic Cooperation (APEC) privacy framework [35]. The issue reports of Chrome and Moodle were also classified in the taxonomy, and some differences between privacy and non-privacy issues were found. A framework was described to examine in the future which issues developers discuss in relevance to privacy legislation [36].
The authors identified the relevant gap in the literature and intend to work on the reporting of issues related to personal data and data protection. In their work, they describe their analysis plan, which focuses mainly on issue types and reporter types. Pull requests were analyzed along with the results of a survey with developers to understand the effect of GDPR on open source software development [12, 37]. Main results include that there is more development activity with GDPR-related pull requests in terms of commits, additions, deletions, and files changed, as well as review activity, while no variations were found in the sentiment of pull requests over time. Relation to previous works. No previous work has examined whether GitHub repositories have acted to accommodate the requirements of recent and popular data privacy laws via commit analysis, whereas only one work has focused on pull requests [12, 37]. An important advantage of the current work is that it relies on all available commits on GitHub and is not restricted to specific repositories, programming languages or ecosystems. # 3. Methodology steps The phases of the work are described in the following parts. The whole procedure is depicted in Figure 1. We have used commits as the main source for analysis, instead of, e.g., issues, for the following reasons. First, some repositories may be using external issue tracking systems (e.g. Jira) or may not be using any issue tracking system, so if we relied on issues we would not have been able to collect information on such repositories. A previous work has reported that issues are almost exclusively reported in large projects with big development teams [27]. Secondly, issue discussions have a more complex structure than commits, and performing the search based on privacy legislation may have included irrelevant content in the dataset or may have excluded relevant data (e.g.
there is a mention of GDPR in comments of an issue $^ 2$ but the issue is not relevant to privacy legislation). Analyzing issues is, nevertheless, an important area of work on data privacy legislation [36]. # 3.1. Keyword selection from legislation In order to choose the keywords of interest to search for in projects' commits, we relied on the experience of one of the authors from previous works on Privacy Enhancing Technologies and GDPR compliance in software systems in the last 12 years [38, 39] and considered the relevant privacy laws [1, 2, 3, 4]. Figure 1: Methodological steps. We devised the following set of keywords that needed to be present in the commit: General Data Protection Regulation or its abbreviation GDPR, California Consumer Privacy Act or its abbreviation CCPA, California Privacy Rights Act or its abbreviation CPRA, and finally Data Protection Act. We chose the above laws for the following reasons. They are all recent and popular, with software needing to comply when offered within the laws' jurisdiction [40]. Second, they have gained popularity in research works: a search on GDPR on Google Scholar since 2022 (and till March 2025) returns approximately 20,900 publications, on CCPA 49,400 and on CPRA 3,810 (although the last two abbreviations may also be used for other purposes). Finally, as aforementioned, one of the authors has research experience with one of the laws (GDPR). We did not consider laws that are less popular on a global level; e.g., the Brazilian General Data Protection Act, effective since February 2020, does not appear in any commit. We also did not include the abbreviation of Data Protection Act, DPA, or more generic privacy-relevant terminology, such as data privacy, as the amount of search results is very large (552,000 commits returned for DPA and 17,400 commits returned for data privacy via GitHub search in March 2025) and they may not be linked to changes relevant to privacy laws, which are our focus.
DPA is also used for other purposes, e.g. as an abbreviation for Digital Process Automation, as a check of some commits shows:

• Fluepke/luca-web-clone: “new checkbox for data processing agreement (DPA) in registration" [SHA:2f878ef9e624224722aa073ee71cb8703f6728f1]

• iqrfsdk/clibdpa: “Update DPA.h for 410" [SHA:51f9b9ddb062d7c6857135c911ee9a3ccef84245]

• dfath/simbapha: “basic volume dpa" [SHA:9cacef1a8f32b6e68ec2a6c6adefdb936d1a558a]

Table 1: Dataset size per keyword.

We also did not focus on security discussions on GitHub, as performed in previous works, and have thus excluded other keywords such as encryption, authentication, authorization [41, 42, 43].

# 3.2. Dataset collection and preliminary analysis

Initial data collection. Table 1 shows the initial size of the dataset that we collected per keyword in April 2023 (columns: collected and keyword sum, no duplicates, after summing the results for each law keyword, e.g. General Data Protection Regulation and GDPR). Data were collected using the GitHub Search API,3 with a search within the text (and comments) of commits. It was chosen as the most suitable option, as we were interested in collecting up-to-date data [44]. Since the GitHub Search API uses a limit of 1,000 results per search query, we indicated date and time intervals in the requests and called the API numerous times to ensure that all commits for each keyword would be collected. Duplicates removal. We removed duplicates using the commit SHA, as we observed that different repositories may contain the same commit, most probably because of forked repositories that have later evolved. We first ordered the commits based on the repository star number as a proxy of popularity and then removed the duplicates, so less popular repositories with the same commits were filtered out.
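The deduplication step can be sketched as follows (a minimal Python illustration of the procedure described above, with a hypothetical data layout; the authors' actual tooling was R-based):

```python
# Order commits by repository star count (popularity proxy) and keep only the
# first occurrence of each commit SHA, so duplicates coming from less popular
# (typically forked) repositories are dropped.
def dedupe_by_sha(commits):
    """commits: list of dicts with 'sha', 'repo' and 'stars' keys."""
    seen, kept = set(), []
    for c in sorted(commits, key=lambda c: c["stars"], reverse=True):
        if c["sha"] not in seen:
            seen.add(c["sha"])
            kept.append(c)
    return kept
```

Because the sort runs before deduplication, a commit shared between a fork and its more-starred origin survives only under the origin repository.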
Stars show whether a user likes a repository or wants to show appreciation, and are a popularity indicator that has also been used in previous works [45, 46]. Date filtering. By inspecting a small number of commit messages (100 random commits irrespective of the keyword; one of the authors performed this task), we found irrelevant cases: e.g. for CCPA there were references to a clock tree and irrelevant mentions including ccpa in a name,4 while CPRA was used to refer to stations.5 This manual inspection was performed by reading the commit message and, when necessary, by additionally visiting the commit page and examining the changed source code. In order to make sure not to include such cases, we filtered out references to a specific keyword that were made before the signing of the respective law (not its effect date, which is later than the dates indicated next): before April 14th, 2016 for GDPR, June 28th, 2018 for CCPA, November 3rd, 2020 for CPRA, and May 23rd, 2018 for the UK Data Protection Act. The excluded commits were then verified by another author and no falsely excluded cases were encountered, apart from one commit indicating CPRA with a date very close to the signing of the law (Oct. 29th, 2020).6 The size of the dataset after the date filtering is shown in the date filter column of Table 1. Additional data collection and unavailable data removal. Since the response of the GitHub Search API did not include all fields needed for our analysis, we collected additional data on repository and on commit level. The data used from each API call type and kept for the subsequent analysis are listed in Table 2. They come from three different call types: the GitHub Search API, the GitHub REST API repository information resource, and the GitHub REST API commit information resource.
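The date filter above reduces to a simple per-law cutoff; a minimal sketch (ours), using the signing dates stated in the text:

```python
from datetime import date

# Signing dates from Section 3.2 (not the later effect dates).
SIGNING_DATES = {
    "GDPR": date(2016, 4, 14),
    "CCPA": date(2018, 6, 28),
    "CPRA": date(2020, 11, 3),
    "DPA":  date(2018, 5, 23),  # UK Data Protection Act 2018
}

def keep_commit(law: str, commit_date: date) -> bool:
    """Keep a commit only if it was made on or after the law's signing date."""
    return commit_date >= SIGNING_DATES[law]
```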
However, when we proceeded with the repository information collection via the GitHub API (data in the 2nd part of Table 2), which was performed on a later date than the commit collection (July 2023), 311 of the initially collected repositories were not returned by the GitHub API (they were indicated as not found in the result), so they were removed from the dataset along with their respective commits in order to guarantee the consistency of the results. Some repositories and some commits were also not returned during the commit change API call in July 2023 (data in the 3rd part of Table 2): 36 repositories and 11 commits from the ones initially collected; these data were therefore also excluded from the dataset. The final dataset size used in the subsequent analysis is shown in the removed unavailable column of Table 1. Manual verification. In order to verify that the data left after the above preliminary analysis indeed contain references to privacy-relevant laws, we performed a manual verification on a statistically representative sample of the dataset with a 95% confidence level (sample function of the R programming language): 381 commit messages for GDPR and 320 for CCPA, whereas for CPRA and the Data Protection Act all 69 and 67 commits from the previous step were examined, respectively. This process of manually inspecting a total of 837 commits was performed by the two authors. Especially for the case of CPRA, where we had encountered a large number of commits before the signing of the law (providing an indication that CPRA is used with a different meaning in many cases), we performed a filtering removing commits from specific repositories (e.g. all repositories under the CPRA-MP organization).
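The representative sample sizes quoted above are consistent with the standard finite-population sample-size formula at a 95% confidence level with a 5% margin of error. The paper only states that R's sample() drew the samples, so the exact formula is an assumption on our part; a sketch:

```python
import math

# Cochran's sample-size formula with finite-population correction.
# z=1.96 (95% confidence), p=0.5 (most conservative), e=0.05 (5% margin).
def sample_size(population: int, z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # infinite-population size (~384)
    n = n0 / (1 + (n0 - 1) / population)     # finite-population correction
    return math.ceil(n)
```

For large populations the result approaches 384, while smaller strata (such as the CCPA commits) need proportionally smaller samples.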
For the case of Data Protection Act commits, a manual filtering was also performed: two commits referring to the Data Protection Act of countries other than the UK were removed, along with four referring to earlier data protection acts (1998, 2012) (example).7 Although the final number of commits for CPRA and the Data Protection Act is very small, we kept them in our dataset, since CPRA builds upon CCPA and the Data Protection Act complements GDPR. For this part of the coding task, there were no disagreements between the two coders: all GDPR and CCPA commits examined were marked as relevant except one case for CCPA where the abbreviation is used differently,8 although it is not a clear case as there is no repository description, so it was decided to filter out all commits from the same repository. CPRA and Data Protection Act commits were filtered with the same output of filtered commits between the two coders. The dataset size after applying the above filtering is shown in the final column of Table 1, with commits coming from a total of 12,391 different repositories.

Table 2: Data used in our analysis.

# 3.3. Pre-processing and analysis tools

For the analysis on repository and commit level, we removed commits returned more than once (by more than one keyword search, and the same commits appearing in more than one repository). However, for some analyses where we compare laws (RQ1, RQ2, RQ3), we kept duplicates to capture all keyword appearances. For the commit comment text analysis (RQ3), we performed the following pre-processing steps commonly encountered in NLP works [47]:

• In order to filter out code, we removed source code blocks marked with backtick characters.

• We removed HTML code blocks (using HTML tag starts and ends).

• We identified external URLs and removed them.

• We removed non-Latin characters (the respective commits were, however, kept), numbers, punctuation and whitespace.

• We converted all words to lowercase.
• We removed common English stop words from the commit messages.

• We removed the following terms that are frequently encountered in commits, as they were considered noise for our case, being commit terminology and not privacy relevant: pull, request, issue, fix, plugin, merge, update, test, branch, json, release, package, variable, unit. We relied on common terminology to create the list, considering also an analysis of verbs found in commits.9

For our mainly quantitative analysis, we used statistical analysis with descriptive statistics, and text analysis, whereas manual coding was used for the qualitative analysis of RQ4; the exact process followed is further detailed in the results description of RQ4. Combining both automated and manual analysis allows us to improve the accuracy of our findings. For implementation purposes, libraries of the R programming language were employed (e.g. dplyr, wordcloud2, tm, ggplot2), whereas some of the descriptive statistics, statistical tests and Cohen’s kappa agreement calculation were run using IBM SPSS Statistics. As in previous works, we measure the size of a commit by counting the (source code) Lines of Code (LOC) it affected [23]. The changed LOC in GitHub considers both the added and the deleted LOC.

# 4. Results

# 4.1. RQ1. Main data privacy laws in commits and time

We calculated the commits and total changed LOC for each law per year (the number of commits is depicted in Figure 2 and LOC numbers are available in the replication package of the work – please refer to the Data statement). Although a commit may contain additional changes – not limited to privacy laws – the reference to privacy laws provides evidence that a reaction to privacy laws was included in the repository change performed. GDPR is the only law applicable in 2016 and 2017 and has the largest number of commits and changed LOC in 2018 (13,098 or 37.15% of all commits mentioning GDPR and 40.01% of all changed LOC for GDPR), which is the year the law came into effect, so it is expected that most repositories decided to perform relevant changes in that year. The 2018 changes are also almost equally divided before and after the law's effect date: 6,095 (46.53%) before and 7,003 (53.47%) after. For CCPA, most commits are in its effect year 2020 (802 or 43.05%) but most LOC were changed in 2021 (41.83%). The effective date of CPRA is very recent (beginning of 2023), so we might see more relevant commits after the data collection date. For the case of the Data Protection Act, most commits and changed LOC appear in 2022 (29 or 47.54% of commits and 58.30% of changed LOC), which may be either attributed to the developments in the UK concerning the Data Protection and Digital Information Bill (introduced in the House of Commons in the UK on July 18, 2022), or the changes may refer to acts with the same naming in other countries that have come into effect in 2021 and 2022. The first and last commit dates across all repositories are shown in Table 3. The first reference to CPRA is before the introduction of the law, where it appears in the commit message together with GDPR: “[...]Add CPRA and GDPR are coming soon[...]."10 Overall, there is a small number of commits in the dataset that refer to more than one law: 392 commits mention both CCPA and GDPR, which is the most usual case, 10 mention both CCPA and CPRA, three mention both DPA and GDPR, and one mentions both CPRA and GDPR. We also examined how long it took each repository to commit the changes performed by calculating the number of days between relevant commits for each law (using the first and last dates of relevant commits in each repository).
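The per-repository duration metric just described can be sketched with standard-library Python (illustrative only; the data layout is hypothetical and the authors' analysis was done in R/SPSS):

```python
from datetime import date

# Days between the first and the last law-relevant commit in each repository.
def commit_span_days(commits):
    """commits: iterable of (repo, commit_date) pairs; returns {repo: days}."""
    first, last = {}, {}
    for repo, d in commits:
        first[repo] = min(first.get(repo, d), d)
        last[repo] = max(last.get(repo, d), d)
    return {repo: (last[repo] - first[repo]).days for repo in first}
```

A repository with a single relevant commit naturally gets a span of 0 days, which is the majority case reported below.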
The box plot of Figure 3 shows that some repositories have spent more time on making repository adaptations for GDPR, even though the average values are comparable for GDPR and CCPA: for some repositories it took almost 1,900 days (1,951 is the highest value). There is a large deviation in the days among repositories, with the mean (M) days between first and last commit and the standard deviation (SD) shown in the fifth and sixth columns of Table 3, while in most cases relevant changes were completed within one day (8,743 or 70.56% of repositories). The above analysis does not consider the time required to perform the actual code change but gives an indication of whether additional changes were necessary in the period after the first relevant commit. In terms of LOC affected by the commits, each commit affects on average 2,206.91 LOC (SD = 46,988.575). As might be expected, more lines are added to accommodate changes than deleted: M = 1,720.16, SD = 44,615.253 for added LOC and M = 486.74, SD = 12,885.866 for deleted LOC. Each commit may also include code changes outside the privacy laws, so the changed LOC numbers may appear higher.

Figure 2: Privacy laws appearing over the years, counting the number of commits.

We also ran a one-way ANOVA test to check whether the law the committer intends to comply with affects the changes that were performed (using changed LOC), with the hypothesis that GDPR requires more changes. Although the results are not statistically significant, the average changed LOC is higher for the case of GDPR, as shown in the last two columns of Table 3. Main findings RQ1: For GDPR and CCPA, most commits appear in the year the law was put into effect (2018 and 2020 respectively), while in 2018 GDPR changes are almost equally divided before and after the law effect date.
Most repositories did not commit changes on days later than the initial commit, but GDPR changes take longer (in days) to commit than changes for CCPA, CPRA and DPA.

# 4.2. RQ2. Which types of repositories and which programming languages are more active

Table 3: Dates of first and last commits in dataset and commits duration in repositories per law.

Figure 3: Number of days between first and last commit in repository per law.

In this RQ, we calculated which repositories are more active in privacy law-related commits, considering the number of commits. In terms of the number of commits, most repositories have only one relevant commit (7,344 or 59.27% of repositories, as in Figure 4). There are only 54 repositories (0.44%) with 50 or more relevant commits and only five with more than 300. Among the five most active repositories, one is a repository integrating code from many repositories, one focuses exactly on data compliance (ministryofjustice/dps-data-compliance) – so for those two cases a higher number of commits is expected – and three are related to popular frameworks: WordPress (helsinki-systems/wp4nix), Microsoft 365 documentation (MicrosoftDocs/microsoft-365-docs) and prebid programmatic advertising strategy (8secz-johndpope/PPI), so the contributors might be more diligent when it comes to privacy compliance. Repositories with only one commit have on average 4,833.67 changed LOC, whereas repositories with more than one commit also have more changed LOC in total: M = 9,061.74. We had a closer look at those cases of only one commit by manually inspecting a representative sample (366 commits with 95% confidence level) to verify that they are relevant to privacy law changes, and this was confirmed. This process was performed by one of the authors. There are 352 commits (from 227 repositories) in our dataset with zero changed total LOC.
With manual inspection of a representative sample of 184 such commits with 95% confidence level from those repositories, we observed that these cases refer mainly to file deletions and additions, while some are not source code files and for this reason have no LOC indication; we kept them, since they are also relevant to data privacy laws and useful to answer our RQs (we verified that the commits were relevant to privacy laws – example).11 This process was also performed by one of the authors.

Figure 4: Frequency of privacy law relevant commits in repositories.

We examined whether the repository owner type (user or organization) affects the level of activities performed. For this purpose, we ran an independent samples t-test using the number of commits as the dependent variable and the type of repository owner as the independent variable. There is a statistically significant difference showing that commits coming from organization repositories are more than the ones from user repositories (F = 83.357, p < 0.001), indicating that organizations perform more changes than individually owned repositories, which may be experimental or serve educational purposes (descriptive statistics in Table 4). Running ANOVA using the main programming language of the repository as the independent variable and the number of commits and the number of days between the first and the last commit (as used in RQ1) as dependent variables, in order to test the hypothesis that the language affects the time required to complete relevant changes, a statistically significant difference was found (F = 1.418, p = 0.002 for number of commits and F = 4.097, p < 0.001 for days between commits). This shows that language constructs might affect the changes required, but further investigation is needed to explore other factors, such as framework usage.
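The one-way ANOVA F statistic used above can be computed directly (the authors used IBM SPSS; this standard-library Python sketch only illustrates the calculation, e.g. on changed-LOC values grouped by law or by language):

```python
from statistics import mean

# One-way ANOVA F statistic: ratio of between-group to within-group
# mean squares.
def anova_f(groups):
    """groups: list of lists of numeric observations; returns F."""
    all_vals = [v for g in groups for v in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The resulting F would then be compared against the F distribution with (k-1, n-k) degrees of freedom to obtain the p-values reported above.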
There are 125 different programming languages indicated as main languages in the repositories we collected, with the 10 most frequent shown in Table 5, whereas there is no language indication for 6.6% of the repositories (819 repositories). The laws that appear in the commits of each programming language, as well as the mean values for the days required to complete relevant commits for data privacy legislation and the commit counts, are also shown in Table 5. Overall, different laws are linked with each programming language.

Table 4: Number of commits per repository owner type.

Table 5: Top programming languages of repositories.

Main findings RQ2: Most repositories have performed only one commit to accommodate data privacy legislation, whereas organization-owned repositories perform more commits. The period between relevant commits is longer for repositories with PHP, Python and Ruby as the main programming language.

# 4.3. RQ3. Main terms appearing in commits

We examined separately for each law the other terms appearing in the commits together with the law name and present the results in the form of wordclouds. We relied on the pre-processing described in the previous section and used a term frequency (TF) matrix with unigrams for this purpose. All wordclouds are depicted in Figure 5. In the wordclouds, we see relevant terms from privacy legislation and required changes: law, delete, privacy, standard, create, new, nist, requirements, analytics, settings, data, consent, remove, compliance, option, notice, change, cookies. More than one law is present in some cases, e.g. GDPR in CCPA-relevant commits and CCPA in CPRA-relevant commits, so some commits address more than one law. Although some common words were removed during the pre-processing step, we still see some terms on development activities (e.g. contributor).
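The term-frequency step behind the wordclouds can be sketched as follows (the authors used R's tm and wordcloud2 packages; this Python illustration uses heavily abbreviated example stop-word and noise lists, not the full lists from Section 3.3):

```python
import re
from collections import Counter

# Abbreviated example lists; the paper's actual lists are longer.
STOPWORDS = {"the", "a", "an", "and", "to", "of", "for", "in", "is"}
NOISE = {"pull", "request", "issue", "fix", "merge", "update", "test", "branch"}

def term_frequencies(messages):
    """Build a unigram term-frequency counter over cleaned commit messages."""
    counts = Counter()
    for msg in messages:
        # Lowercase and keep alphabetic tokens only (drops numbers/punctuation).
        tokens = re.findall(r"[a-z]+", msg.lower())
        counts.update(t for t in tokens if t not in STOPWORDS | NOISE)
    return counts
```

The most frequent remaining terms are what a wordcloud then visualizes, with font size proportional to frequency.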
Figure 5: Wordclouds per law: (a) GDPR, (b) CCPA, (c) CPRA, (d) DPA.

We also performed a keyword search within the commit message to see if there are any references to the main user rights found in the legislation, as shown with the respective frequencies in Table 6. The rights are grouped based on the aim of each right, but for searching in the commits we also used different terminology for the rights, using a keyword list based, as a starting point, on the list suggested for GDPR in [32]. The main privacy laws share terminology in user rights (first and second columns in Table 6), but we also devised variations for other rights using our prior experience on privacy policies [33], creating a relevant list, e.g. for the right to opt-out and Do Not Sell My Personal Information for CCPA. The keyword variations for the right to erasure are listed below and the whole list is available as part of the replication package of the work:

• allow deletion, allow (to) delete, right to erasure, right to delete, right to deletion, right of deletion, right of erasure, right to request deletion, right to be forgotten, erase information, request erasure, erase the personal data, erase your personal data, erase their personal data, erase his/her personal data, erase your data, erase their data, erase his/her data, erase any personal data, erase personal data, erase data, right to erase, user(s) delete, user(s) to delete, data deletion, data delete, delete right, deletion right, deleting right, delete flow, delete request, forget visitor data, forget data

We also added the general right of providing consent to data collection, which is the case most usually encountered in commit messages (7.35% of commits). Most other rights appear in a limited number of commits.
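A multi-phrase search of this kind can be sketched as follows (ours, not the authors' code; the variant lists here are heavily abbreviated, illustrative excerpts, not the full replication-package lists):

```python
# Each right maps to a list of phrase variations; we count how many commit
# messages contain at least one variation of each right.
RIGHT_VARIANTS = {
    "right to erasure": ["right to erasure", "right to be forgotten",
                         "data deletion", "delete request"],
    "right of access": ["right of access", "access request"],
    "right to opt-out": ["right to opt-out", "do not sell my personal information"],
}

def right_frequencies(messages):
    lowered = [m.lower() for m in messages]
    return {
        right: sum(any(v in m for v in variants) for m in lowered)
        for right, variants in RIGHT_VARIANTS.items()
    }
```

Counting messages (rather than raw phrase hits) avoids inflating the frequency of a right that a single commit mentions several times.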
Only the right to erasure (right to request deletion of personal data) is more frequent (1.51% of all commits, with the word delete alone appearing 1,731 times in total), followed by the right to opt-out and the right of access (right to obtain a copy of personal data).

Table 6: User rights in legislation and frequencies in commits.

Main findings RQ3: In the commit comments, there is a presence of some privacy-related terms, but references to user rights are scarce, with the right to erasure being the right encountered most frequently, followed by the right to opt-out and the right of access. User consent provision is the most important aspect commits focus on.

# 4.4. RQ4. Which is the main purpose of a commit

Analysis process. In this RQ, we examined the text of a subset of commit messages and the respective code changes in order to get a better understanding of the purpose of the commit. For the manual analysis of commit texts, we selected the 70 GitHub projects with the highest popularity in the dataset, using stars as a proxy of popularity, as also indicated earlier in the text (Table 7). The top 70 repositories we focused on have in total 594 unique commits (with 703,862 changed LOC). The two authors performed the coding task required to answer RQ4. The coding protocol used was as follows. The task of each coder for this part was to read the commit message and the updated files, and classify the commit into one of the following categories: 1) user consent relevant update in source code or text, 2) source code update to cover a user right, 3) privacy policy or FAQ or documentation update, 4) other source code or text update. When present, the pull request linked to the commit was also examined. These categories were created based on the manual verification performed during the dataset creation process, when it was observed that some commits refer, e.g., to the privacy policy.
If a coder encountered an additional category, however, they would add it to the list without consulting the other coder, and in the end the two coders discussed the results. Thus, a bottom-up approach was partially used in the category creation. Each coder assigned each commit to one category as the basic category of the commit (even if the commit was also related to a secondary category, one was provided as the main one). The independent coding task lasted approximately seven hours for each coder. Both coders are computer scientists with expertise in empirical software engineering research, while one of them has expertise on GDPR.

Table 7: Top 70 most popular repositories in dataset for manual analysis. \*as of August 2024

After the coding task was completed, an additional category was added merging categories on cookies/ads/trackers/analytics, as these were indicated by both coders. They were placed under one category, as they have a similar purpose. This category also covers consent for cookies and cookie preferences. Both coders noticed that some cases were also relevant to security techniques, e.g. encryption, anonymization, but we did not create a separate category due to the small number of such cases (these cases were added to the other source code or text update category). There was an agreement meeting between the two coders, where they discussed cases of disagreement and agreement was reached. Before reaching agreement, Cohen’s kappa was 0.853, showing a very good agreement between the two coders, as a kappa value higher than 0.81 is considered almost perfect agreement [48]. Results. The results are shown in Table 8, along with some examples of commit messages for each category. The vast majority of commits are placed under privacy policy or FAQ or documentation update, whereas some commits accommodate a variety of changes with a mix of changes.
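The inter-rater agreement statistic reported above can be computed as follows (the authors used IBM SPSS; this is a standard-library Python illustration of Cohen's kappa for two coders):

```python
from collections import Counter

# Cohen's kappa: observed agreement corrected for chance agreement.
def cohens_kappa(labels_a, labels_b):
    """labels_a, labels_b: category labels assigned by the two coders."""
    n = len(labels_a)
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n   # observed
    ca, cb = Counter(labels_a), Counter(labels_b)
    pe = sum(ca[c] * cb[c] for c in ca) / (n * n)              # by chance
    return (po - pe) / (1 - pe)
```

A value of 1 means perfect agreement; values above 0.81, such as the 0.853 reported here, are conventionally read as almost perfect agreement.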
We observed that in many of these cases the changes performed were in isolated source code files and it was not feasible to link the change with specific content of the law (i.e. specific user rights or law principles). These changes included, for instance, UI updates, style updates, terminology updates, and adding references to GDPR information. Some commits also refer to changes made so that users of the repository can use adaptations to GDPR, e.g. in the WordPress/WordPress and microsoft/vscode (Visual Studio Code - Open Source) repositories. In the latter repository, most commits refer to adding GDPR comments or GDPR annotations. In the case of the MicrosoftDocs/azure-docs repository, all 235 commits refer to documentation updates, so the number of commits under this category appears higher. Although this is not the typical case, it affects the results of the manual labeling process of this RQ. If we disregard this repository (there are 33 remaining documentation update-relevant commits from 20 of the remaining repositories), the category of source code updates to cover a user right is the most typical. Main findings RQ4: In the top 70 repositories of the dataset in terms of star count, commits refer to user consent updates, user rights updates, policy/FAQ/documentation updates, cookies/ads/trackers/analytics updates, and other updates, with policy/FAQ/documentation updates, other updates and user rights updates being the most common categories.

# 5. Discussion

# 5.1. Findings

Privacy laws indicated. GDPR is by far the most frequently encountered law. This is expected, as it is the law that changed the privacy landscape and started a domino effect towards the update of privacy laws in countries outside the EU. CCPA is more recent than GDPR, but we do not expect relevant commits to reach a comparable number within the next years, as we observed that most commits for each law appear close to the law’s effect year (RQ1). Number of commits.
Most repositories make reference to privacy legislation in a small number of commits: most repositories performed only one commit and did not devote more than one day to law-relevant commits when we consider the dates the commits were performed, as shown in the RQ1 and RQ2 results. The number of days between first and last commit is longer for GDPR in some repositories, and one reason for this might be that GDPR, as the first and main data privacy law, caused some uncertainty in how to include some of its provisions in software systems (ambiguity has been identified as a challenge to achieving legal compliance [49]). On average, more LOC were added for GDPR, which may also be attributed to the fact that subsequent changes (e.g. for CCPA) may have already been covered by GDPR-relevant changes. Comparing these changes with the total number of commits in the most popular repositories of the dataset (RQ4), even repositories with more commit activity than others do not refer to privacy legislation in many commits (Table 7). This indicates that repositories tend to complete relevant changes in bulk and no need to come back for updates arises. Our dataset covers a large number of programming languages compared to the total number of programming languages on GitHub: in 2022, developers used 500 primary languages according to the Octoverse report.12 We observed that the average number of commits differs among programming languages, but it is not clear whether this is attributed to characteristics of the language or whether it is coincidental.

Table 8: Main purpose of commits of the 70 most popular repositories in dataset.

User rights indication. Overall, we found a limited number of user rights in commit messages. Most repositories take actions to ensure users provide their consent on data collection or on the privacy policy overall. The right to erasure is the user right mainly found (e.g. indicated with the terms: delete requests, forget visitor data, delete flow, deletion of users).
Although it is not the only right that requires changes so that users can exercise it, this result might indicate that it is one of the rights most frequently exercised by end-users, and that developers want to ensure they have a mechanism in place. In the simplest case many user rights can be exercised via e-mail requests, but some systems automate the procedure by providing dedicated forms. Moreover, obtaining and deleting data are the rights users are mainly aware of, whereas the right to avoid automated decision-making is not applicable to most software systems. Other rights found are the right to opt-out, the right of access, and the right to restriction of processing. The right to information is also an important right but is usually handled by adding relevant information in the privacy policy of a system, so keyword-based search is less suitable for detecting it in the commit messages. Policy updates are, nevertheless, very common according to the results of RQ4. Detailed commit messages. We observed that developers are not very specific when it comes to the purpose of the changes performed; e.g. many mention only GDPR compliance. This absence of reference may be attributed to the fact that a commit performs a variety of changes to consider the privacy law as a whole, without referring to specific parts of the law. It would be useful to have more informative messages to assist other developers on relevant changes, since repositories might also be used for educational purposes [50]. Previous works have examined characteristics of good commit messages [51]. Cookies. Cookies are an important area of data collection that requires user consent, although this is not always handled as required. Examining GDPR compliance among 20,218 third-party cookies, it was found that only 12.85% have a cookie policy where the cookie is mentioned [52]. In our dataset, the term cookie(s) and its composite terms (e.g.
cookiebanner, gdpranalyticscookie, cookienotice, after removing punctuation) appear in 2,384 or 6.48% of commits, indicating that this is an area where updates for privacy legislation are usually required. The words policy, consent and cookie were identified as the main terms that increased in frequency in posts after the introduction of GDPR and CCPA in Reddit discussions on mobile applications [10]. Cookies were indicated among the categories we identified in the manual analysis in RQ4.

# 5.2. Implications for the future

Educational activities. The focus on specific user rights and the absence of others, as one of the main findings discussed under user rights indication in the previous subsection, may show that more information is available about the need for software systems to include those rights (i.e. the right to erasure, right to opt-out, right of access, and right to rectification), but it also indicates the general need for more educational activities towards informing software engineers about data privacy law user rights (in the results of the manual coding of the commits of the top 70 repositories in RQ4, 67 commits cover user rights). Such activities could be integrated in software engineering curricula. A prior work notes the need for legal professionals’ education to include a better understanding of the underlying technologies [53], but the other direction, with software engineers acquiring deeper legal understanding, is also required. Automatic detection of privacy laws. Automatically detecting whether there is a reference to privacy laws in development artifacts (e.g. source code, commit messages, pull requests) can be useful for understanding which aspects of the laws are more present and addressed in software repositories and which are encountered less usually. This can also show the type of privacy-relevant features offered to users by software systems (e.g. how frequent automated decision making actually is).
There is an existing line of work that performs similar activities on the text of privacy policies to detect the areas covered: automating policy text analysis [31] or examining compliance with legislation such as GDPR [32, 33]. Automated privacy recommendations. Our results also show that tools assisting practitioners in adding privacy safety measures to their source code are necessary. More usable compliance-checking tools were indicated as necessary for COPPA, GDPR, and CCPA [30]. Another previous work analyzed the text of issues in order to recommend good first issues to newcomers, with the aim of helping repository maintainers and helping newcomers get acquainted with the process [54]. Similarly, existing contributors could suggest good practices to practitioners attempting to integrate data privacy mechanisms, or other legislation principles, into their code. The area of Privacy as Code, which also concerns the generation of privacy-friendly code, is still in its infancy, as indicated by Ferreyra et al. [55]. Verifying compliance with privacy laws. The most active year is the law’s effect year, but many changes were also performed in subsequent years (Figure 2), so developers may need access to appropriate tools to help them act sooner. Investigating how Large Language Model-based assistants (e.g. GitHub Copilot, Tabnine) can assist developers towards privacy legislation compliance would be useful [56, 57, 58]. Actual compliance with privacy laws is difficult to detect, and the current work did not verify whether the changes performed in the repositories were appropriate for compliance, as the source code was not executed. In previous work in the area of mobile applications, the actual network traffic of 109 applications that need to comply with CCPA was compared with the data contained in responses to consumers’ access requests [59].
It was found that at least $39\%$ of the applications shared device-specific identifiers and at least $26\%$ shared geolocation with third parties, but did not disclose this in the request responses. We thus believe that more research effort towards examining compliance at various levels can assist in guiding end-users to trust, or not trust, specific software applications.

# 6. Threats to validity

External validity. This refers to the extent to which we can generalize our findings; it is not expected to affect our work, as we used GitHub, the main source available to perform analysis on our main research question. We did not filter commits based on the total number of repository commits, as done in previous works [24], because we wanted to capture as many commits referring to data privacy legislation as possible. The low number of commits for CPRA and the Data Protection Act in the dataset may be less helpful in drawing conclusions for those specific laws. We made the assumption that commit messages with no country or year indication within this date range refer to the UK Data Protection Act. Although some cases may refer to other countries, the commits refer to privacy legislation and can thus be considered relevant. In RQ4, the results may differ if a larger sample of commits is examined. Internal validity. We relied on the commits’ content under the assumption that the commits refer to the keywords we were interested in. To limit the threat of introducing irrelevant commits, we performed appropriate filtering (i.e. filtering out data dated earlier than the law’s introduction date) and manually examined a number of commits (837 commits). We may nevertheless have missed false negative cases in the data collection, where the commit message does not include the law name but references a relevant issue that describes it. A GitHub-wide collection and analysis of commits is, however, practically a tremendous task.
Construct validity. This measures the degree to which we measure what we claim. When calculating the time required to perform changes, we were not able to measure the time spent on the actual source code changes, as this information is not available. We instead used the commit dates to see how long it took to commit all changes. For CPRA, the number of days between the first and last commit per law may need to be verified with newer data, since its effect date was close to the data collection time. The changed LOC in each commit may also include other changes not relevant to privacy laws, so the reported LOC numbers may be slightly inflated. Both commit numbers and LOC are reported to limit this threat; the commit numbers correspond to privacy-relevant changes, as we did not encounter any other cases during the manual verification step, apart from one case for the CCPA keyword accounting for only $0.12\%$ of the examined cases. Regarding user rights, we used the terminology from the relevant legislation, so we may have missed references to the rights using different terminology within the message. To limit this threat, we used terminology variations that we created, relying also on [32] for GDPR terms. Conclusions validity. The vast majority of the commits reference GDPR, so there might be a bias in our results towards GDPR. We argue, though, that since subsequent privacy legislation relied on GDPR, the conclusions are valid for the current data privacy legislation landscape as a whole.
Free and open source software has gained a lot of momentum in industry and the research community. The latest advances in privacy legislation, including the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), have forced the community to pay special attention to users' data privacy. The main aim of this work is to examine software repositories that act on privacy laws. We collected commit data from GitHub repositories in order to find indications of the main data privacy laws (GDPR, CCPA, CPRA, UK DPA) in recent years. Via an automated process, we analyzed 37,213 commits from 12,391 repositories since 2016, and 594 commits from the 70 most popular repositories in the dataset were manually analyzed. We observe that most commits were performed in the year the law came into effect and that privacy-relevant terms appear in the commit messages, whereas references to specific data privacy user rights are scarce. The study showed that more educational activities on data privacy user rights are needed, as well as tools for privacy recommendations, while verifying actual compliance via source code execution is a useful direction for software engineering researchers.
[ "cs.SE" ]
# 1 Introduction

The iterative process of coding, involving frequent edits and insertions (Bavarian et al., 2022; Fried et al., 2023), establishes Fill-in-the-Middle (FIM) code generation as a prevalent task in code completion. Models tackling this task must generate the missing code segment conditioned on both the preceding (left) and succeeding (right) context. A key challenge in FIM lies in seamlessly integrating the generated middle with the subsequent code while maintaining both structure and meaning – a non-trivial learning objective for models. Consequently, raw model outputs often undergo rule-based post-processing to remove extraneous content. As shown in Table 1, two widely used FIM code generation evaluation benchmarks employ specific truncation rules that may not generalize to real-world FIM scenarios with arbitrary left and right contexts. Furthermore, such truncation strategies often fail to account for alternative, yet valid, ways of generating the missing code. For instance, as illustrated in Figure 1, a single-line infilling task might expect one line as a solution, but an LLM could generate five lines that perfectly match the surrounding context. In this case, truncating the generated middle to a single line would incorrectly mark it as a failure. Given the advancements in code LLMs, a crucial question emerges: do modern code LLMs naturally know when to stop generating given any arbitrary left and right context, thereby eliminating the need for post-processing techniques like truncation? The existing body of work (Bavarian et al., 2022; Fried et al., 2023; Nguyen et al., 2023; Zheng et al., 2024) predominantly examines base LLMs, trained on massive amounts of data to understand language patterns and generate consistent output. These models acquire Fill-in-the-Middle (FIM) capabilities by learning from reordered prefix-middle-suffix sequences, created via random splits of the training data.
The purpose of this reordering is to allow the LLM to auto-regressively predict the middle segment, conditioned on both the left and right contexts as past information. In contrast to base LLMs, we posit that instruction-tuned LLMs are better equipped for FIM generation due to their customized nature and their inherent capacity to adhere to instructions. Our primary motivation for focusing on instruction-tuned LLMs stems from the objective of avoiding the expensive pre-training (or its continuation) required by models like those in (Bavarian et al., 2022), which demonstrated that fine-tuning with FIM does not achieve the same performance as pre-training with FIM. This study investigates the necessity of post-processing instruction-tuned LLM outputs for FIM code generation. Our empirical analysis reveals that the raw outputs of off-the-shelf instruction-tuned LLMs often require editing. Consequently, we fine-tuned both base and instruct versions of Qwen2.5-Coder. Our findings demonstrate that these fine-tuned models can produce outputs that do not require any post-processing when the middle code segments consist of whole lines.

Table 1: Truncation strategy used in two popular FIM benchmarks.

```python
# Canonical solution
def even_odd_count(num):
    """[ docstring truncated ]"""
    even_count = 0
    odd_count = 0
    for i in str(abs(num)):
        if int(i) % 2 == 0:
            even_count += 1
        else:
            odd_count += 1
    return (even_count, odd_count)

# FIM by our finetuned model
def even_odd_count(num):
    """[ docstring truncated ]"""
    even_count = 0
    odd_count = 0
    if num < 0:
        num = str(num)[1:]
    else:
        num = str(num)
    for i in num:
        if int(i) % 2 == 0:
            even_count += 1
        else:
            odd_count += 1
    return (even_count, odd_count)
```
In fact, applying any preset, heuristic-based post-processing in such cases actually leads to incorrect middle outputs. However, when middle segments comprise partial lines, it becomes necessary to truncate overlapping code segments. Based on our findings, we offer straightforward post-processing recommendations for LLM-generated middle code segments. In summary, we contribute the following: 1. We show that off-the-shelf instruction-tuned LLMs require post-processing for effective FIM code generation and exhibit suboptimal performance due to a lack of task-specific fine-tuning or optimization. 2. We demonstrate that lightweight fine-tuning significantly boosts LLM performance for FIM generation. Interestingly, when the middle code consists of complete lines, the raw outputs from these fine-tuned models achieve better automatic evaluation scores than post-processed outputs, meaning no further editing is needed. However, if the middle includes partial lines, post-processing is still required.

# 2 Instruction-tuning of LLMs for Fill-in-the-Middle Code Generation

We investigate the FIM code generation accuracy of state-of-the-art instruction-tuned code LLMs by prompting them with instructions, as illustrated in Figure 3. This prompting method is consistent with their standard usage for code generation. Our findings in subsection 3.3 reveal that instruction-tuned LLMs perform suboptimally, even after their outputs undergo dataset-specific post-processing. Building on this observation, we further investigate whether lightweight supervised fine-tuning can empower code LLMs for improved FIM generation. To achieve this, we created a training dataset of instruction-response pairs using an LLM. First, we collected a set of Python functions from GitHub, following the data collection pipeline detailed in Wei et al. (2024). This involved a rigorous filtering process: type checking with Pyright, removal of benchmark items, elimination of poorly documented functions, and deduplication.
Using these collected functions, we employed a straightforward approach to generate instruction-response pairs. Specifically, we prompted Mixtral-8x22B (Jiang et al., 2024) with the template shown in Figure 2, asking it to split each function into prefix, middle, and suffix according to one of five strategies outlined in the prompt. After generating the prefix, middle, and suffix, we verified that their concatenation reconstructs the original function. In the end, we collected ${\approx}1\mathrm{M}$ instruction-response pairs that we used to fine-tune code LLMs.

# 3 Experiments

# 3.1 Setup

Training & Inference Setup. We fine-tuned the 7B, 14B, and 32B parameter base and instruct versions of Qwen2.5-Coder. The fine-tuning spanned 5000 steps on NVIDIA H100-80GB GPUs, leveraging the AdamW optimizer (Kingma and Ba, 2015) with a batch size of 256 and a maximum sequence length of 4096 tokens. We initialized the learning rate at 5e-6 and applied a cosine annealing scheduler with a $10\%$ warmup. We utilized tensor parallelism and BF16 precision to accelerate the training process. For evaluation, we utilized the final training checkpoint, and during inference, we employed greedy decoding. Evaluation Benchmarks and Metrics. We evaluated models using two FIM code generation benchmarks: HumanEval Infilling (Bavarian et al., 2022) and SAFIM (Gong et al., 2024). The HumanEval Infilling benchmark features three distinct tasks: single-line, multi-line, and random-span infilling. We provide the post-processing functions for these tasks in Figure 4. In contrast, SAFIM is a syntax-aware FIM benchmark, consisting of tasks focused on algorithm block, control flow, and API function call completion. For both benchmarks, we present results based on the standard pass@1 metric.

# 3.2 Research Questions

We aim to address the following questions. 1.
What is the out-of-the-box effectiveness of instruction-tuned code LLMs for fill-in-the-middle (FIM) code generation? 2. Can supervised fine-tuning significantly improve the FIM generation accuracy of code LLMs, and how does such fine-tuning affect the base vs. instruct versions of an LLM? 3. Are the raw outputs of fine-tuned LLMs sufficiently effective for automatic evaluation?

# 3.3 Results

The results are presented in Table 2. We consistently observed a few performance trends. Instruction-tuned LLMs are not ready out-of-the-box. The Qwen2.5-Coder-Instruct models consistently perform poorly on both benchmarks, particularly on the SAFIM and random-span HumanEval infilling tasks. Their low accuracies clearly indicate that these models cannot be effectively used off-the-shelf for FIM generation. Supervised fine-tuning (SFT) is a major leap for FIM instruction-tuning. The overall results clearly indicate a significant performance boost from SFT of Qwen2.5-Coder-Instruct models. The average performance of the 7B and 14B models doubled, while the 32B models saw an impressive $40{-}50\%$ improvement compared to their off-the-shelf counterparts. Sample efficiency of base vs. instruct LLMs. The average pass@1 accuracies across both benchmarks suggest that tuning instruction-following LLMs yields slightly better performance. Raw outputs of fine-tuned LLMs are effective. From Table 2, we observe that post-processing consistently lowers accuracies for single-line and multi-line infilling tasks in the HumanEval benchmark, as shown in Figure 1. However, for random-span infilling, raw LLM outputs do require editing, which is evident from the improved performance after post-processing. We see a similar pattern for the SAFIM benchmark.
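One simple form of such editing is trimming content the model re-generates from the surrounding context. The line-based heuristic below is an illustrative sketch of this idea, not the exact procedure used by either benchmark:

```python
def trim_overlap(prefix, middle, suffix):
    """Remove lines at the edges of a generated middle that duplicate
    the end of the prefix or the start of the suffix (heuristic sketch)."""
    mid_lines = middle.splitlines()
    pre_lines = prefix.splitlines()
    # Ignore blank suffix lines when looking for a duplicated head.
    suf_lines = [line for line in suffix.splitlines() if line.strip()]

    # Drop leading middle lines that repeat the tail of the prefix.
    while mid_lines and pre_lines and mid_lines[0].strip() == pre_lines[-1].strip():
        mid_lines.pop(0)
    # Drop trailing middle lines that repeat the head of the suffix.
    while mid_lines and suf_lines and mid_lines[-1].strip() == suf_lines[0].strip():
        mid_lines.pop()
    return "\n".join(mid_lines)
```

A whole-line duplicate of the prefix tail or suffix head is removed, while a genuinely new middle passes through unchanged; partial-line (random-span) overlaps would need character-level trimming instead.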
Based on our observations and experimental results, we recommend post-processing to remove overlapping code segments found between the prefix and the generated middle, and similarly between the middle and the suffix. This is our standard approach for all infilling tasks in this work.

# 3.4 Other Findings

Our experiments showed that generating multiple FIM samples from a single Python function (resulting in 5M instruction-response pairs) did not significantly improve supervised fine-tuning. Thus, we suggest future work prioritize diversity in Python functions over generating many samples from one.

Table 2: Performance comparison of Qwen2.5-Coder-Instruct models across three different sizes. SL, ML, and RS indicate “single-line”, “multi-line”, and “random-span” infilling tasks, respectively. Highlighted rows show our fine-tuned models’ performances. Bold indicates the highest performance for each model group.

Additionally, fine-tuning models for more than roughly one epoch degraded performance on downstream FIM tasks. Therefore, we recommend using a larger collection of training samples, but with only a single training iteration over them.

# 4 Related Work

Bavarian et al. (2022) presented a foundational approach to training large language models (LLMs) for FIM code generation, marking a significant first step in this area. Their core innovation involved segmenting unlabeled code into three distinct parts and rearranging those segments to create training sequences. This pioneering strategy proved highly influential, shaping nearly all subsequent research in FIM code generation (Fried et al., 2023; Zheng et al., 2024; Wu et al., 2024; Sagtani et al., 2025). In contrast to this dominant paradigm, Nguyen et al. (2023) introduced an alternative method. They trained two separate language models, each generating code in an opposing direction: one from left-to-right and the other from right-to-left.
The FIM task was then solved by having these independently generated segments converge and “meet” in the middle. More recently, Ding et al. (2024) departed from these approaches, showing improvements by adopting a planning- and lookahead-based approach to language generation. To the best of our knowledge, the existing body of work in FIM code generation has primarily focused on either pre-training base LLMs or exploring alternative architectures and training methodologies. A significant gap in the existing literature is the lack of focused investigation into the intrinsic FIM capabilities of instruction-tuned LLMs – models already adapted for following instructions. Our work aims to bridge this gap by specifically evaluating and enhancing the FIM performance of models that have already been fine-tuned for instruction following, offering a novel perspective on leveraging these readily available and powerful models for this crucial code completion task.
Post-processing is crucial for the automatic evaluation of LLMs in fill-in-the-middle (FIM) code generation due to the frequent presence of extraneous code in raw outputs. This extraneous generation suggests a lack of awareness regarding output boundaries, requiring truncation for effective evaluation. Determining an optimal truncation strategy, however, often proves intricate, particularly when the scope includes several programming languages. This study investigates the necessity of post-processing instruction-tuned LLM outputs. Our findings reveal that supervised fine-tuning significantly enhances FIM code generation, enabling LLMs to generate code that seamlessly integrates with the surrounding context. Evaluating our fine-tuned \texttt{Qwen2.5-Coder} (base and instruct) models on the HumanEval Infilling and SAFIM benchmarks demonstrates improved performance without post-processing, especially when the \emph{middle} consists of complete lines. However, post-processing of the LLM outputs remains necessary when the \emph{middle} is a random span of code.
[ "cs.SE", "cs.CL" ]
# 1 Introduction

The integration of Machine Learning (ML) into various domains has revolutionised fields ranging from manufacturing to finance, offering unprecedented capabilities to learn from data and make accurate predictions. In safety- and ethics-critical domains such as healthcare, the ability not only to predict outcomes accurately but also to understand the decision-making process is crucial for ensuring transparency and trustworthiness [25]. Consequently, among various models and learning paradigms, a typical distinction based on model interpretability is drawn between black-box and white-box models [24]. Black-box models achieve high accuracy by leveraging complex and often opaque relationships between inputs and outputs, while white-box models, such as rule-based systems, explicitly detail how inputs are transformed into outputs through predefined rules [18]. These rules can be derived from domain knowledge, extracted from data using ML methods like decision trees and logic learning machines [23,13], or derived from black-box models by rule extraction [24]. In high-stakes contexts such as the clinical domain, where maintaining control over the decision-making process is paramount, rule-based predictors remain prominent and preferred [22]. However, as datasets accumulate and domain knowledge expands, these rule-based models can grow complex, often comprising hundreds of rules and patterns. Consequently, discerning which features, and which combinations thereof, are most crucial for effective classification becomes challenging [26]. To address this complexity, each rule-based ML model has developed its own feature importance strategy, such as Gini importance for decision trees and relevance scores for logic learning machines [20,6]. However, such metrics cannot explicitly identify which feature combinations and interactions are most relevant to the problem at hand, which may make the interpretation of the model harder for domain-expert users.
In this context, another challenge is to compare different rule sets, for instance obtained from different models or over different datasets, as they typically vary in rule size and rule structure [11,13]. Performing pairwise rule comparisons can quickly become computationally impractical for large rule sets and requires defining a measure of distance between rules to facilitate meaningful comparisons, which can be challenging [10,2]. Given the critical role of rule-based classifiers in high-stakes domains and the interpretability challenges they can present, we ask the following questions:

- (a) what features contribute the most to rules in rule-based classifiers?
- (b) what feature combinations contribute to rules in these classifiers?
- (c) how similar/different are two rule sets over the same features with respect to the features’ contribution to their rules?

To tackle these questions, we propose a comprehensive and generalised framework for estimating feature contributions in rule-based systems, which includes:

- a graph-based feature visualisation strategy to explore relationships between features within rule sets, across entire datasets or individual classes;
- a distance metric for comparing rule sets based on feature contributions, independent of rule size, facilitating comparisons across ML models and classes;
- a novel feature importance metric derived from the proposed feature graph that is agnostic to rule-based predictors and computationally efficient.

The proposed approach is extensively evaluated on both synthetic and benchmark datasets to assess its effectiveness in detecting relevant features and important feature interactions. A detailed analysis of two clinical datasets showcases the practical utility of our method, particularly in real-world biomedical applications.
An implementation of the method is made available on GitHub for public access at https://github.com/ChristelSirocchi/rule-graph.

# 2 Related work

Rule-based systems are essential in high-stakes domains due to their inherent interpretability, which can become compromised by a large number of rules. A variety of visualisation strategies, distance metrics, and feature importance scores have been proposed to assess feature contribution in rule-based systems. Section 2.1 overviews algorithms for deriving rule sets, encompassing rule-based models learning rules from data and rule extraction methods approximating black-box models. Section 2.2 summarises available approaches for representing and visualising rules, Section 2.3 reviews existing strategies for comparing rule sets, while Section 2.4 examines feature importance and selection strategies.

# 2.1 Rule-based systems

Rule-based systems, originally termed expert systems due to their role in replacing or assisting human experts in knowledge-intensive tasks, offer distinct advantages by explicitly representing knowledge as rules: (a) enabling domain experts to curate, update, and refine the rule set; (b) deriving new knowledge through inference engines; and (c) providing clear explanations to users for predictions [18]. Amidst the proliferation of black-box ML models, the transparency and explainability provided by rule-based systems have become particularly valuable. Consequently, rule-based systems remain widely used, especially in applications where consistency and transparency in decision-making are critical, such as expert systems for clinical decision support, assisting healthcare providers in applying clinical guidelines and treatment protocols [19]. Despite the improved predictive accuracy obtained by deep learning models, only a few are FDA-approved, and most predictive systems in healthcare remain rule-based. Expert systems can leverage both expert-defined rules and rules learned from data.
Decision Trees (DT) partition data recursively based on features, effectively handling mixed data types and non-linear relationships, though they risk over-fitting without proper pruning [23]. Repeated Incremental Pruning to Produce Error Reduction (RIPPER), rooted in association rule mining, iteratively constructs rule sets by refining conditions to minimise classification errors, generating concise rule sets but struggling with noisy or irrelevant features [4]. The Logic Learning Machine (LLM) uses shadow clustering and Boolean algebra to efficiently implement the switching neural network model and derive concise sets of rules [6]. Fuzzy systems employ fuzzy logic to represent uncertainty and model imprecise data through degrees of membership in linguistic terms [21]. To reconcile the accuracy of black-box models with the need for interpretability, symbolic knowledge extraction techniques have emerged, deriving interpretable rules from trained models. Provided that the extracted rules reflect the black-box behaviour with high fidelity, they can serve as an interpretable surrogate model or as a basis for constructing explanations. Extraction algorithms are categorised into tree-based methods, like Classification and Regression Trees (CART), which recursively partition the feature space, and hypercube-based methods, such as ITER and GridEx, which iteratively expand in the input space [24].

# 2.2 Rule sets representation and visualisation

To understand the relationship between rules and features in rule sets, various strategies have been proposed. Common rule set representations include lists and trees, with recent advancements introducing a layered graph structure, where the input layer represents input features, the conjunction layer the rule antecedents, and the output layer the class labels.
Connections between input and conjunction layers denote condition judgements, while connections between conjunction and output layers signify mappings between rule antecedents and consequents [14]. Other graph-based visualisations are used in fuzzy systems, with rules as nodes and edges representing interactions between rules at the inference level in terms of rule co-firing [21]. In association rule mining, relationships between rules and features are represented as bipartite graphs (i.e. with nodes only sharing edges with nodes of the opposite type) and edge weights and node widths reflecting contributions computed as support, confidence, and lift [8]. However, these methods do not emphasise the relationships among features. Visualising feature interaction can be achieved by projecting the rule-feature bipartite graph onto the feature dimension. Available projection strategies are: (a) simple weighting, with edges weighted by the frequency of common associations, (b) hyperbolic weighting, addressing the decreasing marginal contribution of additional links, and (c) resource allocation, which assumes each node has a certain amount of resources [28]. However, these strategies are defined for unweighted bipartite graphs and established weighted counterparts are still lacking. Moreover, these projections often cause unique relationships between nodes of different sets to disappear. In projecting rule sets onto feature graphs, accounting for edge weights connecting rules and features and preserving unique relationships is crucial, requiring a novel dedicated projection strategy. 
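As a rough illustration of such a projection, the sketch below maps a weighted rule–feature bipartite graph (each rule encoded as a feature-to-weight dict, an assumed representation) onto a feature graph; the min-based edge weighting is an illustrative choice, not the paper's dedicated strategy:

```python
from collections import defaultdict
from itertools import combinations

def project_onto_features(rules):
    """Project a weighted rule-feature bipartite graph onto the features.

    `rules` is a list of {feature: weight} dicts, one per rule. Each
    feature's node weight accumulates its contributions across rules;
    the edge weight between two features accumulates the smaller of
    their contributions in every rule they co-occur in (an illustrative
    weighting, so that weak co-occurrences contribute little).
    """
    edges = defaultdict(float)
    node_weight = defaultdict(float)
    for rule in rules:
        for feature, weight in rule.items():
            node_weight[feature] += weight
        # One edge per pair of co-occurring features, keyed in sorted order.
        for f1, f2 in combinations(sorted(rule), 2):
            edges[(f1, f2)] += min(rule[f1], rule[f2])
    return dict(node_weight), dict(edges)
```

Here a feature's accumulated node weight serves as a crude importance score, while edge weights capture how strongly pairs of features contribute to the same rules, preserving relationships that a simple unweighted projection would lose.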
# 2.3 Rule sets distance methods Evaluating the (dis)similarity of knowledge bases represented as rule sets plays a crucial role in several tasks: integrating diverse information sources, evaluating coherence with existing knowledge, ensuring backward compatibility during knowledge updates, detecting and eliminating duplicates or redundancies, and monitoring knowledge changes over time, highlighting evolving trends, emerging patterns, or shifts in underlying concepts [11]. Additionally, comparing rule sets over different dataset splits enables assessing model consistency [13]. Current methods for comparing rule sets typically involve two main approaches: one compares each rule in one set against all rules in another set, while the other matches each rule with the most similar rule in the other set based on criteria such as common rule outcomes [10] or pattern matching algorithms [2]. These methods require defining a rule (dis)similarity criterion and often evaluate all possible pairwise rule comparisons, leading to substantial computational costs. Furthermore, these methods often assume that rules within each set are mutually exclusive, i.e. only one rule fires for each observation, which does not hold for some rule-based models like LLM. Given these challenges, recent efforts have adopted a bag-of-words strategy to provide a vector representation for rules and used cosine similarity for rule set comparison, effectively overcoming computational challenges [13]. However, this approach does not quantify the contribution of features to rules, highlighting the need for more comprehensive methods for rule set comparison. # 2.4 Rule sets feature importance Feature importance analysis plays a crucial role in improving model interpretability by pinpointing the most relevant input features and supporting feature selection efforts. 
In biomedical applications, feature importance strategies can enable clinicians to classify patients with distinct phenotypic characteristics using a small set of signature genes and potential biomarkers, facilitating the development of personalised screening tests and customised treatment [5]. A model-agnostic strategy for evaluating feature contributions is permutation importance, which measures the reduction in model performance when the values of a feature are randomly shuffled [1]. However, it can be biased toward correlated features and can be sensitive to dataset size, permutation number, and chosen performance metric. The field of XAI has introduced model-agnostic methods based on local explanations to uncover feature contributions to individual predictions. LIME (Local Interpretable Model-agnostic Explanations) perturbs the input data around a specific instance and fits a local interpretable model to approximate the black-box model locally, while SHAP (SHapley Additive exPlanations) uses Shapley values from cooperative game theory to quantify the contribution of each feature across all possible feature subsets [16]. In the context of rule-based predictors, dedicated strategies have been proposed. For instance, features can be selected based on their presence or frequency in rule antecedents without considering their impact on rule effectiveness [3]. In fuzzy sets, the impact of each feature on fuzzy rules is evaluated using functions that estimate how well the feature predicts a class label [22]. Association rule mining relies on metrics like support, lift, and confidence to evaluate rule quality, primarily aimed at selecting rule subsets rather than analysing feature importance [27]. Explicit feature importance scores are defined for LLM and DT. 
LLM computes the combined relevance of the conditions containing that feature [13], while DTs use information gain and entropy-based methods, such as Gini importance, to calculate the reduction in impurity or entropy brought by a feature across all nodes that use it to split the data [20]. Additional methods for identifying relevant features have been developed in the field of feature selection, categorised by their relationship with the predictor. Filter methods use statistical metrics such as correlation or variance and, while computationally efficient, do not offer insights into how the model uses features. In contrast, wrapper methods search for a feature subset that optimises a predefined criterion when the algorithm is trained on this subset, and pose challenges such as the need for an appropriate criterion and high computational costs [26]. Notably, since feature selection is often treated as a step-wise process, these methods often fail to exploit the interactions between features.

# 3 Proposed approach

Starting from a layered rule representation akin to Liu et al. [15], we consider three sets of nodes: features, rules, and classes. In our approach, connections between feature nodes and rule nodes are weighted edges, indicating the contribution of each feature to the corresponding rule (feature relevance). Similarly, connections between rule nodes and class nodes are weighted edges, representing the contribution of each rule to class prediction (rule relevance). We propose a projection strategy to map this tripartite graph onto the feature set such that edges between features reflect their shared contribution to the same rules, while the centrality of each feature reflects its overall importance across all rules and serves as a feature importance metric. We extend this strategy to construct class-specific feature graphs and define a distance metric to compare graphs.

Rule set.
Let $\mathcal{D}$ represent a dataset comprising $d$ samples denoted by $\pmb{x}_s$, with $s$ from 1 to $d$. Each sample is described by $m$ input features, defined over the feature set $\mathcal{V} = \{v_1, v_2, \ldots, v_m\}$. For each input $\pmb{x}_s$, $y_s$ denotes the corresponding target. In classification tasks, the target takes discrete values in $\mathcal{T} = \{t_1, t_2, \ldots, t_r\}$. A rule set $\mathcal{R}$ can then be defined over $\mathcal{D}$, mapping instances to targets, and consists of a set of rules, each denoted by $R$. If $\mathcal{R}$ has $n$ rules, then $\mathcal{R} = \{R^1, R^2, \ldots, R^n\}$. Each $R$ is a logical expression whose antecedent, or premise, is a set of conditions over the features of the dataset, while the consequent is the outcome (here, a class assignment) when all conditions specified by the antecedent are met. Formally, a rule $R^k$ can be denoted as a pair $R^k = (C^k, T^k)$, with $C^k = \{c_1^k, c_2^k, \ldots, c_q^k\}$ and $T^k \in \mathcal{T}$, where $C^k$ denotes the set of $q$ conditions in the rule and $T^k$ is the target associated with that rule:

$$ c_1^k \wedge c_2^k \wedge \cdots \wedge c_q^k \implies T^k $$

Let $\mathcal{I}_{\vdash}$ be a function that computes the relevance of a feature $v$ for a rule $R$ in a dataset $\mathcal{D}$.
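The rule formalism above can be sketched in Python; the class and field names below are illustrative, not the paper's implementation:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    """A rule R^k = (C^k, T^k): conditions over features and a target class."""
    conditions: List[Callable[[dict], bool]]  # c_1^k, ..., c_q^k
    target: str                               # T^k

    def satisfied_by(self, x: dict) -> bool:
        # x satisfies R^k iff every condition in the antecedent holds
        return all(c(x) for c in self.conditions)

# Example rule: glucose > 120 AND bmi > 30  =>  "diabetic"
r = Rule(
    conditions=[lambda x: x["glucose"] > 120, lambda x: x["bmi"] > 30],
    target="diabetic",
)
```

A sample is modelled here as a plain dict of feature values; a rule fires only when all of its conditions evaluate to true on that sample.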
For a rule set $\mathcal{R}$, a feature relevance matrix $P$ can be defined as the $n \times m$ matrix, with $n$ the number of rules in $\mathcal{R}$ and $m$ the number of features, with elements $[p_{ij}]$, where

$$ p_{ij} = \mathcal{I}_{\vdash}(\mathcal{D}, v_j, R^i) \quad \forall v_j \in \mathcal{V}, \forall R^i \in \mathcal{R}. $$

Additionally, let $\mathcal{I}_{\nabla}$ be a function that computes the relevance of a rule $R$ in a rule set $\mathcal{R}$ over a dataset $\mathcal{D}$. For a rule set $\mathcal{R}$, a rule relevance vector $\pmb{q}$ of length $n$ can be defined with elements $q_j$, where

$$ q_j = \mathcal{I}_{\nabla}(\mathcal{D}, R^j) \quad \forall R^j \in \mathcal{R}. $$

Graph projection & visualisation. Let $A$ be an $m \times m$ matrix, with $m$ the number of features, with elements $[a_{ij}]$ such that

$$ a_{ij} = 1 - \prod_{k=1}^{n} (1 - p_{ki} \cdot p_{kj} \cdot q_k) \quad \forall i, j \in \{1, \ldots, m\} $$

$A$ is then normalised such that the sum of all its elements is 100, to obtain $A'$:

$$ A'_{ij} = \frac{A_{ij}}{\sum_{i,j=1}^{m} A_{ij}} \cdot 100 \quad \forall i, j \in \{1, \ldots, m\} $$

$A'$ is the adjacency matrix of a weighted, undirected feature graph, which can be visualised to examine the feature interactions within the rule set $\mathcal{R}$ over the dataset $\mathcal{D}$.
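Under these definitions, the projection onto the feature graph can be sketched as follows (the function name and the toy `P`, `q` values are illustrative):

```python
import numpy as np

def project_to_feature_graph(P, q):
    """Project rule-level relevances onto a feature graph.

    P : (n_rules, n_features) feature relevance matrix, entries in [0, 1]
    q : (n_rules,) rule relevance vector, entries in [0, 1]
    Returns the normalised adjacency matrix A' (entries sum to 100).
    """
    n, m = P.shape
    A = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            # a_ij = 1 - prod_k (1 - p_ki * p_kj * q_k)
            A[i, j] = 1.0 - np.prod(1.0 - P[:, i] * P[:, j] * q)
    return A / A.sum() * 100.0

# Two rules over three features: rule 0 jointly uses features 0 and 1,
# rule 1 uses feature 2 alone (which yields only a self-edge a_22).
P = np.array([[0.9, 0.8, 0.0],
              [0.0, 0.0, 0.7]])
q = np.array([1.0, 0.5])
A = project_to_feature_graph(P, q)
```

The multiplicative aggregation makes the edge between features 0 and 1 strong even though they co-occur in only one rule, while feature 2 contributes only through its diagonal entry.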
According to the proposed projection strategy, the product $p_{ki} \cdot p_{kj} \cdot q_k$ in Equation (3) captures the joint relevance of features $v_i$ and $v_j$ with respect to rule $R^k$, scaled by $q_k$ to also account for the relevance of the rule. These contributions are aggregated across all rules in a multiplicative manner, making the overall score more sensitive to instances where two features exhibit high joint relevance in at least one relevant rule, in contrast to simple summation. Indeed, in rule sets two features are not expected to interact strongly across all rules, and strong interactions can be noteworthy even if infrequent. Moreover, this projection strategy generates self-edges $a_{ii}$ for each feature $i \in \{1, \ldots, m\}$, quantifying its individual contribution across all rules. These self-edges also account for instances where a feature appears alone in a rule, which are often crucial to the rule set but are lost in most projection strategies.

Class-specific graph projection. A feature graph specific to a given class $t \in \mathcal{T}$ can be constructed by considering only the rules having that class as consequent. Let $\mathcal{R}_i$ be the subset of rules in $\mathcal{R}$ with target $t_i$, i.e. $\mathcal{R}_i = \{R^k \mid R^k \in \mathcal{R} \land T^k = t_i\}$. $A'_i$ is the adjacency matrix of the feature graph defined as in Equation (3), but where the matrix $P$ and the vector $\pmb{q}$ are computed over the rule set $\mathcal{R}_i$ rather than $\mathcal{R}$.

Graph distance.
The distance between two graph representations can be computed as the distance between the respective adjacency matrices $A'_1$ and $A'_2$:

$$ d(A'_1, A'_2) = \lVert A'_1 - A'_2 \rVert_F $$

where the Frobenius norm $\|A\|_F$ of an $m \times m$ matrix $A$ is given by:

$$ \|A\|_F = \sqrt{\sum_{i,j=1}^{m} |A_{ij}|^2} $$

Feature importance. A feature importance score can be computed for the features in $\mathcal{V}$ as the degree centrality of the nodes of the graph defined by $A'$. Specifically, the importance of $v_i$ is given by the sum of the elements in the $i$-th row of $A'$:

$$ \mathrm{Importance}(v_i) = \sum_{j=1}^{m} A'_{ij} $$

This metric aggregates contributions from both self-edges and edges with other features, capturing both the independent and combined feature contributions.

Relevance metrics. The feature and rule relevance metrics proposed in LLM and adopted in this study leverage the concepts of error and covering, based on the fraction of data samples assigned to a class and satisfying a rule. A data sample $\pmb{x}_s$ satisfies a rule $R^k$ if all its conditions are true for $\pmb{x}_s$, i.e.,

$$ \pmb{x}_s \models R^k \quad \iff \quad c_1^k(\pmb{x}_s) \wedge c_2^k(\pmb{x}_s) \wedge \cdots \wedge c_q^k(\pmb{x}_s) $$

where $c_i^k(\pmb{x}_s)$ denotes the evaluation of condition $c_i^k$ on sample $\pmb{x}_s$.
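A minimal sketch of the graph distance and centrality-based importance defined above, using toy matrices and illustrative function names:

```python
import numpy as np

def graph_distance(A1, A2):
    # d(A'_1, A'_2) = ||A'_1 - A'_2||_F  (Frobenius norm of the difference)
    return np.linalg.norm(A1 - A2, ord="fro")

def feature_importance(A):
    # Importance(v_i) = sum_j A'_ij  (degree centrality, self-edges included)
    return A.sum(axis=1)

# Toy normalised adjacency matrices for two feature graphs
A1 = np.array([[40.0, 10.0], [10.0, 40.0]])
A2 = np.array([[30.0, 20.0], [20.0, 30.0]])
```

Identical graphs have distance zero, and the importance vector simply reads off each node's (weighted) degree.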
The subset $\mathcal{D}^k$ of $\mathcal{D}$ satisfying the conditions $C^k$ of $R^k$ can be defined as:

$$ \mathcal{D}^k = \{\pmb{x}_s \in \mathcal{D} \mid \pmb{x}_s \models R^k\} $$

Additionally, let $\mathcal{D}_i$ represent the subset of all samples in $\mathcal{D}$ that are assigned to a given target class $t_i$, and let $\mathcal{D}'_i$ be the subset of samples not assigned to $t_i$:

$$ \mathcal{D}_i = \{\pmb{x}_s \in \mathcal{D} \mid y_s = t_i\} \quad \mathcal{D}'_i = \{\pmb{x}_s \in \mathcal{D} \mid y_s \neq t_i\} $$

Then, covering can be defined as the proportion of samples assigned to the correct class $T^k = t_i$ that satisfy the rule $R^k$, while error is the proportion of samples assigned to a different class that satisfy the rule $R^k$, i.e.,

$$ \mathit{covering}(R^k) = \frac{|\mathcal{D}^k \cap \mathcal{D}_i|}{|\mathcal{D}_i|} \quad \mathit{error}(R^k) = \frac{|\mathcal{D}^k \cap \mathcal{D}'_i|}{|\mathcal{D}'_i|} $$

and rule relevance can be defined as:

$$ \mathcal{I}_{\nabla}(\mathcal{D}, R^k) = \mathit{covering}(R^k) \cdot (1 - \mathit{error}(R^k)) $$

Moreover, adapting from [6], let $R^k_{-h}$ be the rule obtained from $R^k$ by removing the conditions on feature $v_h$, i.e.,

$$ R^k_{-h} = (C^k_{-h}, T^k), \quad C^k_{-h} = \{c^k \mid c^k \in C^k \land V(c^k) \neq v_h\} $$

where $V(c^k)$ denotes the feature over which the condition is applied.
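The covering, error, and rule relevance defined above can be sketched as follows (illustrative names; `satisfies` would come from evaluating a rule's conditions on each sample):

```python
import numpy as np

def covering_error(satisfies, y, target):
    """covering(R^k) and error(R^k) for a rule with consequent `target`.

    satisfies : boolean array, True where x_s satisfies R^k
    y         : array of class labels for the dataset D
    """
    in_class = (y == target)
    covering = satisfies[in_class].mean()   # |D^k ∩ D_i| / |D_i|
    error = satisfies[~in_class].mean()     # |D^k ∩ D_i'| / |D_i'|
    return covering, error

def rule_relevance(satisfies, y, target):
    # I_∇(D, R^k) = covering(R^k) * (1 - error(R^k))
    c, e = covering_error(satisfies, y, target)
    return c * (1.0 - e)

# Five samples: three of class 1, two of class 0; the rule fires on
# two in-class samples and one out-of-class sample.
y = np.array([1, 1, 1, 0, 0])
satisfies = np.array([True, True, False, True, False])
rel = rule_relevance(satisfies, y, 1)   # covering 2/3, error 1/2
```

A rule is rewarded for covering many samples of its own class and penalised in proportion to how often it fires on the other classes.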
Then, feature relevance can be computed as the increase in the rule's error resulting from removing the conditions over the feature:

$$ \mathcal{I}_{\vdash}(\mathcal{D}, v_h, R^k) = (\mathit{error}(R^k_{-h}) - \mathit{error}(R^k)) \cdot \mathit{covering}(R^k) $$

Other rule relevance metrics include support, confidence, and lift, while an alternative feature relevance score is impurity gain (all implemented in our code).

# 4 Evaluation strategy

# 4.1 Feature graph evaluation on synthetic dataset

The proposed approach was first evaluated on synthetic datasets to showcase the advantages of the graph feature representation over one-dimensional feature importance scores, particularly its ability to differentiate between features that are predictive of the target independently or in combination. Three types of datasets were generated: (1) with all relevant features independently predictive of the target, (2) with all relevant features predictive of the target when combined, and (3) with a mix of independently and combined predictive features. Each dataset comprised 2000 instances and 8 features, with the number of relevant features varying from 2 to 6, and 10 datasets were generated per configuration using different random seeds. All features were uniformly sampled in the range [0, 1]. Independently predictive features were obtained by assigning a set of intervals in the range [0, 1] to each feature. For each data sample, the target was set to 1 if the value of at least one feature fell within its corresponding predefined interval. In contrast, combined predictive features were generated by setting the target variable to 1 if the majority of the feature values in the data sample exceeded a given threshold. These two strategies were also combined to obtain datasets with some independent and some combined relevant features.
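The two labelling strategies described above can be sketched as follows; the specific intervals, threshold, and relevant-feature choices here are illustrative, not the repository's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2000, 8

# All features uniform in [0, 1]
X = rng.uniform(0.0, 1.0, size=(n, m))

# (1) Independently predictive: target is 1 if ANY relevant feature
# falls inside its assigned interval.
intervals = {0: (0.2, 0.4), 1: (0.6, 0.9)}   # features 0 and 1 are relevant
y_indep = np.zeros(n, dtype=int)
for f, (lo, hi) in intervals.items():
    y_indep |= ((X[:, f] >= lo) & (X[:, f] <= hi)).astype(int)

# (2) Combined predictive: target is 1 if the MAJORITY of relevant
# features exceed a shared threshold.
relevant = [2, 3, 4]
thresh = 0.5
y_comb = ((X[:, relevant] > thresh).sum(axis=1) > len(relevant) / 2).astype(int)
```

In the first dataset each relevant feature can decide the label on its own, so rules tend to use single features (heavy self-edges); in the second, no single feature suffices, so rules combine features (heavy off-diagonal edges).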
For further implementation details, refer to the GitHub repository for reproducing the synthetic datasets. The rule-based classifier adopted in this evaluation was DT, with experiments also conducted using LLM, yielding similar results (not shown). DTs were trained using 5-fold nested cross-validation with hyperparameter tuning and converted to rule sets by translating each path from the root to the leaves into if-then rules. Feature graphs were constructed from these rule sets using the proposed method, with adjacency matrices visualised as heatmaps. Gini importance was also computed, serving as a standard feature importance metric for comparison.

# 4.2 Feature graph evaluation on benchmark datasets

The potential of the proposed approach to identify relevant features and their interactions was evaluated on public datasets, primarily sourced from the clinical domain, the main application target of this method. To ensure broader applicability, datasets from other domains were also included. The selected datasets were diverse in the number of instances (from 100 to 5000), number of features (from 4 to 240), number of classes (from 2 to 10), and feature types (categorical, continuous, and mixed). Two datasets were analysed in depth. The Pima Indians Diabetes dataset, with 768 medical profiles and 8 clinical features for diabetes detection, was used as a binary classification case study to examine and compare feature contributions across models. The Breast Tissue dataset, comprising 106 instances with 9 attributes of electrical impedance measurements of freshly excised breast tissue samples, was used as a multi-class classification case study to detect class-specific feature interactions. For these datasets, rule sets were generated according to four strategies: LLM, DT, RIPPER, and a MultiLayer Perceptron (MLP) with rule extraction using a CART implementation available from the PsyKE library [24].
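The decision-tree-to-rule-set conversion used here, where each root-to-leaf path becomes an if-then rule, can be sketched over a toy tree (the tuple-based tree encoding is an illustrative stand-in for a fitted DT):

```python
# Hypothetical minimal tree: internal nodes are (feature, threshold, left, right)
# tuples, leaves are class labels. Each root-to-leaf path yields one rule.
def tree_to_rules(node, conditions=()):
    if not isinstance(node, tuple):              # leaf: emit accumulated rule
        return [(list(conditions), node)]
    feat, thr, left, right = node
    rules = tree_to_rules(left, conditions + ((feat, "<=", thr),))
    rules += tree_to_rules(right, conditions + ((feat, ">", thr),))
    return rules

# Toy tree: split on glucose, then on BMI in the low-glucose branch.
tree = ("G120", 127.5,
        ("BMI", 28.0, "healthy", "diabetic"),
        "diabetic")
rules = tree_to_rules(tree)
```

Each returned pair is a `(conditions, target)` rule, e.g. `G120 <= 127.5 AND BMI > 28.0 => diabetic`.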
The fidelity of the extracted rule sets was evaluated by computing accuracy and F1-score relative to the black-box model, and the number of extracted rules was set to 15 to balance readability and fidelity.

# 4.3 Feature selection performance and robustness on benchmarks

The effectiveness of feature centrality as a measure of feature importance, assessed by its ability to identify the top-$k$ features in a dataset, was evaluated across 15 benchmark datasets. Our graph-based importance score was compared against three feature importance metrics: permutation importance, Gini importance, and average SHAP values. DTs were trained using 5-fold nested cross-validation to derive rule sets and generate feature graphs. Evaluation criteria encompassed both the performance and the robustness of the feature importance scores. Performance evaluation involved selecting the top 5 and top 10 features identified by each metric, training decision trees on these features, and computing prediction accuracy. Robustness was assessed by calculating the average pairwise Spearman's correlation across cross-validation folds for each importance score.

# 5 Results

# 5.1 Feature graph evaluation on synthetic dataset

The evaluation of the proposed approach on synthetic datasets underscores the superiority of a graph-based feature representation over a one-dimensional feature importance score for uncovering the collective roles of features in class prediction. Figure 1 presents the average adjacency matrix of feature graphs built on rule sets derived for synthetic datasets with varying numbers of relevant features and feature configurations. For independently predictive features (first row), the heatmap reveals heavy weights on the diagonal (self-edges), indicating that each rule primarily relies on a single highly predictive variable. Connections between different features show weaker weights, suggesting minimal influence from other variables.
In contrast, combined predictive features (third row) exhibit heavier weights on edges connecting different features, indicating collaborative predictive power among multiple features. While Gini importance effectively distinguishes relevant from irrelevant features, it fails to differentiate between features that are predictive independently or in combination. Intermediate scenarios, comprising a mix of these two types of features, were also explored, yielding similar insights. These results underscore the advantage of our feature graph approach, which provides a more nuanced representation of feature interactions.

# 5.2 Feature graph evaluation on benchmark datasets

Rule set visualisation. Feature graphs constructed from rule sets over the Pima Indians Diabetes dataset illustrate the valuable insights this representation can provide on feature interactions in a clinical setting. Figure 2 displays feature graphs corresponding to rule sets obtained from four learning schemes (LLM, DT, RIPPER, MLP+CART), where node size is proportional to centrality (our feature importance score). Node centrality reveals that all models identify glucose levels (G120) as the most crucial feature for diabetes classification, which aligns with the clinical understanding that diabetes is characterised by elevated blood sugar. Age is identified as the second most important risk factor by DT and MLP and the third by LLM and RIPPER. Body mass index (BMI) and family history of diabetes (DPF) are also consistently leveraged across all models, confirming known diabetes risk factors. In fact, according to the NIH, the top three risk factors for diabetes are being overweight or obese, being 35 years or older, and having a family history of diabetes. The number of pregnancies is considered by LLM and RIPPER, reflecting the role of gestational diabetes.
Other features, which have positive correlations with the target and appear in the rule sets, show weak or null edge weights and very low centrality. This indicates that these features do not substantially contribute to the rules and that their removal does not strongly affect the model's predictive ability.

Fig. 1: Adjacency matrices constructed according to the proposed approach and Gini importance indices obtained from decision trees trained on synthetic datasets comprising a number of relevant features ranging from 2 to 6, which are predictive of the target class either independently or combined.

Feature interactions reveal a heavy edge between Glucose and Age in DT and MLP, indicating that age is predictive of diabetes when glucose levels are also elevated. Similarly, a strong edge between BMI and Glucose suggests that BMI's predictive power is enhanced when considered alongside glucose levels. This aligns with established protocols that predict high diabetes risk when both features are elevated [12,25]. Additionally, LLM and RIPPER identify interactions between Glucose and the number of pregnancies. Minor interactions not involving Glucose are found in DT and RIPPER between DPF and Age, and between BMI and Age, suggesting that the impact of BMI on diabetes risk varies with age and reflecting the established association between obesity and age as risk factors for diabetes [17]. Overall, the analysis shows that Glucose is the most predictive feature, while other factors (BMI, Age, DPF, pregnancies) can be predictive, but primarily in combination with Glucose. Understanding the combined predictive value of clinical features is crucial for determining which patient information should be collected and prioritised to enhance diagnostic accuracy.

Fig. 2: Evaluation on the Pima Indians Diabetes dataset, comprising 8 features: pregnancies (P), glucose at 2 hours (G120), blood pressure (BP), skin thickness (ST), insulin at 2 hours (I120), body mass index (BMI), diabetes pedigree function (DPF), and age (A). Four rule sets were generated using LLM, DT, RIPPER, and MLP with rule extraction via CART. Feature graphs are visualised with node size proportional to centrality (self-edges not shown).

Rule sets comparison. The pair-wise distance calculated between the adjacency matrices of the feature graphs in Figure 2 shows that the two most similar models (in terms of feature contribution to extracted rules) are DT and MLP, whose graphs are dominated by Glucose-BMI and Glucose-Age interactions. In contrast, the most different models are DT and RIPPER, which differ in the Glucose-Age and Glucose-BMI interactions (stronger in DT) and the Glucose-pregnancies interaction (stronger in RIPPER). Evaluating the distance between rule sets, facilitated by our graph representation, can be beneficial in various scenarios, particularly for ensuring consistency and continuity in decision-making. When updating a rule-based expert system with a new, more predictive rule set, selecting the one whose feature graph is most similar to the existing system's ensures that predictions and explanations remain consistent. In clinical settings, this supports the concept of continuity of care, defined as the consistency of healthcare events experienced by individuals over time and across different providers [7]. The pair-wise distances calculated between feature graphs constructed from DTs of varying depths trained on the Pima dataset demonstrated high similarity, suggesting robustness against over-fitting.
These findings confirm that the proposed approach, by evaluating a feature's contribution in terms of the quality of rules with that feature removed, effectively handles scenarios where less relevant features are included, as arises when a model grows complex and over-fits the data, thus reducing the need for extensive hyperparameter tuning, which is unfeasible in some resource-constrained scenarios. Moreover, in this experiment, feature centrality rankings remained more stable across different tree depths than Gini importance, with an average Spearman's correlation of 0.98 versus 0.94, confirming the greater ranking robustness offered by our approach.

Class-specific rule set visualisation. Class-specific feature graphs were generated for the Breast Tissue dataset from rule sets derived from DT and LLM (Figure 3). For both models, predicting fibroadenoma and mastopathy proved the most challenging, requiring nearly all features and exhibiting multiple feature interactions. This is unsurprising, as these two tissues can hardly be distinguished even in histopathology and are often grouped under the same class. In contrast, predicting adipose tissue relied solely on the length of the spectral curve (P), without any feature interactions. In fact, this feature indicates the overall complexity and variability of the tissue's impedance profile, distinguishing homogeneous tissues like adipose from others with more complex impedance spectra. For predicting carcinoma, the phase angle at 500 kHz (PA500) emerged as the most important feature. Indeed, carcinomas are characterised by higher cellularity and different extracellular matrix compositions and exhibit distinct phase angles. This finding aligns with literature linking phase angle to survival in advanced cancer patients [9].
Both models identified interactions between PA500 and impedance (DA) and/or impedivity (I0), while DT detected an interaction between impedivity and the area under the impedance spectrum (AREA), all features that provide further information on the tissue's electrical properties. In predicting glandular tissue, both models utilised the impedance (DA) and maximum of the spectrum (MAX IP) feature combination, with LLM also incorporating the phase angle. In connective tissue prediction, impedivity was crucial, either independently (in LLM) or in conjunction with the spectral curve length (in DT). Overall, the two models detected similar interactions, with variations likely due to the different learning paradigms, the scarcity of data samples, and the redundant nature of the features. Nonetheless, these explorations offer valuable insights into which sets of measurements should be collected to distinguish a given diagnostic group from others.

Fig. 3: Class-specific feature graphs for the six classes of the Breast Tissue dataset (carcinoma, fibro-adenoma, mastopathy, glandular, connective, adipose), constructed from (a) decision tree and (b) logic learning machine rule sets.

# 5.3 Feature selection performance and robustness on benchmarks

Classification accuracy results for 15 benchmark datasets (Table 1) show that when selecting the top 5 features, both our graph-based importance score and permutation importance were highly effective, achieving 7 and 8 top accuracy scores, respectively. In contrast, Gini importance excelled only once, while SHAP never did. For the top 10 features, centrality and permutation importance again performed best, with 8 and 7 top scores, respectively, while Gini and SHAP each excelled in only two datasets. Overall, the proposed approach performs comparably to permutation importance, a well-established metric, and surpasses Gini and SHAP. Evaluation of feature rank robustness demonstrates that the proposed approach is the most stable, achieving 8 top scores across 15 datasets. Gini importance follows with 4 top scores, SHAP with 2, and permutation importance with 1, indicating its sensitivity to data splits. Therefore, while the proposed importance metric performs comparably to permutation importance in accuracy, it excels in robustness, making it the most reliable metric across the evaluated benchmarks by effectively mitigating the impact of data variability.
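The robustness criterion used above, the average pairwise Spearman's correlation of importance scores across folds, can be sketched as follows (this implementation assumes no tied scores within a fold):

```python
import numpy as np
from itertools import combinations

def spearman(a, b):
    # Spearman's rho via the rank-difference formula (no ties assumed):
    # rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    d = ra - rb
    n = len(a)
    return 1.0 - 6.0 * (d ** 2).sum() / (n * (n ** 2 - 1))

def robustness(scores_per_fold):
    # Average pairwise Spearman correlation of importance scores across folds
    pairs = list(combinations(scores_per_fold, 2))
    return sum(spearman(a, b) for a, b in pairs) / len(pairs)

# Three folds whose importance rankings agree perfectly -> robustness 1.0
folds = [[0.1, 0.5, 0.9], [0.2, 0.6, 1.0], [0.0, 0.4, 0.8]]
score = robustness(folds)
```

A metric whose per-fold rankings barely change scores near 1.0; a metric sensitive to the data split scores lower.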
In domains where transparency and trustworthiness are crucial, such as healthcare, rule-based systems are widely used and often preferred over black-box models for decision support systems due to their inherent interpretability. However, as rule-based models grow complex, discerning crucial features, understanding their interactions, and comparing feature contributions across different rule sets becomes challenging. To address this, we propose a comprehensive framework for estimating feature contributions in rule-based systems, introducing a graph-based feature visualisation strategy, a novel feature importance metric agnostic to rule-based predictors, and a distance metric for comparing rule sets based on feature contributions. By experimenting on two clinical datasets and four rule-based methods (decision trees, logic learning machines, association rules, and neural networks with rule extraction), we showcase our method's capability to uncover novel insights on the combined predictive value of clinical features, both at the dataset and class-specific levels. These insights can aid in identifying new risk factors, signature genes, and potential biomarkers, and determining the subset of patient information that should be prioritised to enhance diagnostic accuracy. Comparative analysis of the proposed feature importance score with state-of-the-art methods on 15 public benchmarks demonstrates competitive performance and superior robustness. The method implementation is available on GitHub: https://github.com/ChristelSirocchi/rule-graph.
# 1 Introduction

Bloom filters and other bit vector filters are used widely in database management systems to perform early data reduction [4–7, 16]. Bloom filters provide an efficient way to probabilistically remove rows early, reducing the number of rows participating in further processing and improving query performance. Typically, bit vector filters such as Bloom filters are built in hash joins while building the hash table, and applied to table scans on the probe side of that hash join [8, 11, 19]. If the probe side consists of a subtree of operator nodes, a Bloom filter can often be pushed down through those operators to the table scans. When pushed through an intermediate operator, a Bloom filter can also reduce the number of rows participating in that operator, magnifying the improvements observed from using Bloom filters [5, 6]. Given that Bloom filters applied to table scans reduce the number of rows produced by a scan, deliberate consideration of this revised row estimate during optimization should lead to potentially better query plans. It was illustrated in [8] that when Bloom filters are included in a plan, the lowest-cost join order can differ from the lowest-cost join order when Bloom filters are absent. Since Bloom filters are typically added during post-processing (e.g., [5, 6]), after the optimal query plan structure has already been determined, the optimal plan that includes Bloom filters may not be found. While [8] described how to include Bloom filters as a transformation rule in a top-down query optimizer [9, 10], we do not know of any work that describes how to include Bloom filters in a bottom-up optimizer [21, 23].
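A minimal sketch of how such a filter is built on a join's build side and probed during a scan (illustrative only, not a DBMS implementation):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions over an n-bit array."""
    def __init__(self, n_bits=1024, n_hashes=3):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = 0  # bit array stored as one big integer

    def _positions(self, key):
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        # False positives are possible; false negatives are not.
        return all(self.bits >> p & 1 for p in self._positions(key))

# The build side of a hash join inserts its join keys...
bf = BloomFilter()
for k in [3, 17, 42]:
    bf.add(k)

# ...and the probe-side scan drops rows whose key definitely cannot match.
probe_keys = [3, 5, 42, 99]
survivors = [k for k in probe_keys if bf.might_contain(k)]
```

Every true match survives the filter, while most non-matching rows are eliminated before reaching the join, at the cost of a small false-positive rate.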
We argue that by revising the cardinality estimate for scan plans that include Bloom filters, and incorporating a cost model for building and applying Bloom filters, a bottom-up optimizer can also produce query plans with better join ordering, better join methods, and better re-partitioning strategies than simply adding Bloom filters in a post process. An example is shown in Figure 1, illustrating the join produced by our system for TPC-H [24] query 12 with and without including Bloom filters in bottom-up cost-based optimization (CBO). When Bloom filters are included in costing (BF-CBO), the planner explores a plan that applies a Bloom filter to the table orders, which has a considerably lower estimated row count than without the Bloom filter. A better join-input ordering is then selected, resulting in far fewer rows needing to participate in the query’s join and reducing the overall runtime. Figure 1: The join order for TPC-H query 12 without including Bloom filters during bottom-up CBO (panel a) uses the table o:orders with 150M rows as the build side of a hash join (HJ). The l:lineitem table with an estimated 2.8M rows and actual 3.1M rows after local predicate filtering is broadcast (BC) to each of the 48 computing threads used in this example. A Bloom filter is not applied during post-processing in this case because the probe side is a foreign key column referencing an unfiltered primary key column on the build side; a Bloom filter cannot filter any probe side rows in this case. When including Bloom filters in CBO (panel b), the joininput ordering is reversed so a Bloom filter can be built on l:lineitem and applied to o:orders, significantly reducing its estimated row count to 6.4M rows. The reduced row estimate for o:orders lowers the cost of this join-input ordering. The planner then selects the lowest cost plan as depicted, which maintains the new join-input ordering and redistributes (RD) both sides. The query runs with $4 9 . 
2 \%$ lower latency when including Bloom filters in costing. At a high level, our method adds additional sub-plans to scan nodes where single-column Bloom filters can be applied. These Bloom filter scan sub-plans are costed and given new cardinality estimates. When evaluating joins between two relations, the additional Bloom filter sub-plans are combined with all compatible sub-plans from the other relation, and a unique cost and cardinality estimate is computed for each combination. This allows a completely different query plan, relative to a post-processing application of Bloom filters, to be built by a bottom-up optimizer. The optimizer search space is necessarily increased by the inclusion of additional Bloom filter sub-plans. Evaluating additional sub-plans is a common problem that must be addressed by the optimizer when adding new operators to a DBMS. One method we adopt is to impose search-space-limiting heuristics. However, one nuance that applies uniquely to Bloom filter sub-plans, and which we will describe further in Section 3, is that their row counts cannot be estimated until the complete set of tables on the build side of the join that provides the Bloom filter is known. If handled in a naïve way, this means the extra Bloom filter sub-plans must be maintained with unknown cost until computing the join that provides the required Bloom filter build relation. This naïve approach leads to an explosion of the search space and an increase in optimization time that is prohibitive. Our contributions in this paper include 1) introducing a unique property of Bloom filter sub-plans in the context of bottom-up CBO that triggers their special handling, and 2) proposing a two-phased bottom-up approach that minimizes the increase in optimizer search space. 
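A common way to fold a Bloom filter into a scan's revised cardinality estimate, sketched here under assumed numbers (this is not the paper's cost model): a probe row whose key truly matches the build side always passes the filter, while a non-matching row passes only at the filter's false-positive rate.

```python
def bloom_filtered_rows(probe_rows, match_fraction, fpr):
    """Revised cardinality estimate for a scan with a Bloom filter applied.

    probe_rows     : estimated rows produced by the scan without the filter
    match_fraction : fraction of probe rows whose key appears on the build side
    fpr            : Bloom filter false-positive rate
    """
    return probe_rows * (match_fraction + (1.0 - match_fraction) * fpr)

# Illustrative numbers loosely inspired by the TPC-H Q12 example:
# 150M orders rows, a small fraction matching the filtered lineitem keys.
est = bloom_filtered_rows(150_000_000, 0.042, 0.01)
```

The estimate shrinks toward `probe_rows * match_fraction` as the false-positive rate approaches zero, which is what makes the Bloom-filtered join-input ordering cheaper during bottom-up plan enumeration.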
Our method allows Bloom filters to be considered during bottom-up CBO, which enables the optimizer to find different join orderings that may provide an opportunity to apply more Bloom filters and provide better predicate transfer. Next, we describe related work (Section 2). In the remaining sections of the paper, we describe our method in detail (Section 3, with notations listed in Table 1), demonstrate our results (Section 4), and conclude with a discussion (Section 5).

# 2 Related work

The idea that Bloom filters can be used for predicate transfer across multiple joins is described in [25] (Pred-Trans). The authors describe that a predicate on one table ($T_1$) can be transferred to a joining table ($T_2$) through a Bloom filter. $T_2$ can realize the filtering effects of the predicate on $T_1$ by applying the Bloom filter built from the prefiltered $T_1$. $T_2$ can further transfer that predicate's filtering effects to yet another table (e.g., $T_3$) by using another Bloom filter, and so on. This idea has its roots in earlier work showing that semi-joins can be used to reduce the size of relations prior to joins [2], and to minimize data transfer in a distributed context [1, 3]. This work indicates that finding the best data-reduction schedule is its own optimization problem. The authors of [5, 6] extended this line of work by using bit vector filters instead of semi-joins for row reduction. They showed that the positioning and ordering of bit vector filters applied to a fixed execution plan could affect the amount of data reduction observed, again indicating the need for optimization of a reduction schedule. In the Pred-Trans paper, Bloom filters were applied as a prefiltering step before any joins were computed.
However, the arrangement of Bloom filter application was determined heuristically: the tables were arranged from small to large, then all possible Bloom filters were applied in one forward pass and one backward pass. The best join order was found independently after the input tables had been reduced. This approach was beneficial, but likely missed the best arrangement of Bloom filters for optimal predicate transfer, as no costed optimization of the reduction schedule was performed. Our approach differs in that we directly consider the estimated selectivity and cost of applying Bloom filters when building the join graph, so that effective join orders can be found that inherently consider the predicate-transferring ability of Bloom filters. In [26], the authors study the join order of star-schema queries when Bloom filters are built on multiple dimension tables and applied to a single fact table. They found that with these Bloom filters applied, the query plans were robust to the join order of the multiple dimension tables; each join order tested had similar costs. Ding et al. extended this finding to snowflake-schema queries, and demonstrated that the best join order for a query can be different when Bloom filters are applied [8]. They proposed a cost model for Bloom filters and implemented their Bloom filter-aware optimization as a transformation rule in Microsoft SQL Server [18], a top-down, Volcano/Cascades-style query optimizer. Their optimizer uses heuristics to detect snowflake queries to trigger their Bloom filter transformation rule. Our work applies to a bottom-up query optimizer, as opposed to a top-down one, and applies to all query schemas, not just star-schema or snowflake-schema queries. It is general-purpose in its construction because we incorporate the cost and cardinality estimate of Bloom filters directly when building each query plan.
# 3 Methods

# 3.1 Preliminaries and Naïve approach

The basic principle we follow for including Bloom filters in bottom-up query optimization is to maintain information about the Bloom filter in the nodes to which those Bloom filters will be applied. The well-known process of bottom-up CBO starts by evaluating the cost of all supported methods of realizing the base relations in the query, i.e., the different ways of scanning the required database tables. Each of the different ways of accessing those relations can be thought of as a sub-plan; during bottom-up CBO, a list of sub-plans for each relation (including join relations and base relations) is maintained, and each sub-plan in the relation's plan-list represents the lowest-cost method with a specific set of properties for realizing the corresponding relation. During the evaluation of a join between two relations, the sub-plans from one relation are tested in combination with the sub-plans of the other relation, their join cost is computed, and only the lowest-cost sub-plans for the join relation are kept for the next level of bottom-up CBO (where the sub-plans of that join relation may be combined with the sub-plans of other relations). Higher-cost sub-plans are pruned away. This pruning helps to limit the search space of sub-plan combinations that the optimizer needs to evaluate at higher levels of bottom-up CBO. In our approach, we start by adding new Bloom filter sub-plans to base relations (or table scans), which include additional information about the Bloom filter(s) that could be applied to those scans. All Bloom filter information is included on the apply side. These additional Bloom filter sub-plans are then included during join evaluation when the sub-plans from one relation are combined with those of another. The Bloom filter information becomes an additional property, and sub-plans with higher costs can be pruned according to a common property set [15].
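As a rough illustration of the plan-list bookkeeping described above, the following sketch keeps only the lowest-cost sub-plan per property set. The `SubPlan` type, its fields, and `add_subplan` are illustrative names for this sketch, not the system's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class SubPlan:
    cost: float
    rows: float
    props: frozenset  # properties such as sort order or distribution

def add_subplan(plan_list, new):
    """Keep only the lowest-cost sub-plan for each distinct property set."""
    for existing in plan_list:
        if existing.props == new.props and existing.cost <= new.cost:
            return plan_list  # the new sub-plan is dominated; prune it
    # the new sub-plan survives; drop any higher-cost plan with the same properties
    kept = [p for p in plan_list
            if not (p.props == new.props and p.cost > new.cost)]
    kept.append(new)
    return kept
```

Sub-plans with different property sets coexist in the list, which is what later allows Bloom filter information to be carried as an extra property.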
Thus, Bloom filter sub-plans are maintained similarly to how interesting join orders can be supported [22]. We consider adding Bloom filter sub-plans on scan nodes only because this ensures that the final plan will push down Bloom filters as far as possible to those scan nodes. This means we must build our Bloom filters using hash keys derived from values in single columns, rather than supporting hash keys based on values across multiple columns. So, when a join consists of multiple join columns, instead of building a multi-column Bloom filter we plan for, and build, separate single-column Bloom filters. Additionally, we only consider building Bloom filters on the build side of hash join nodes. While building in other nodes could be supported, following this convention allows us to ensure Bloom filters will be fully built before they are used on the probe side. The Bloom filter property we add to scan node sub-plans can propagate up through joins. It may be present in the sub-plans created for join relations, or it can be removed if the joined relation resolves the Bloom filter, that is, if the joined relation provides the required build side of the Bloom filter sub-plan. The Bloom filter property differs from other properties, like sort order, in one important sense, however: the cardinality (or estimated row count) of a scan with a Bloom filter applied depends on the set of relations involved on the build side of the hash join that creates the Bloom filter.

Table 1: Table of notations

To explain this dependency, we first note that the cardinality of a relation $R_0$ with a Bloom filter built from a single relation $R_1$ (denoted $|R_0 \hat{\times} R_1|$ and shown in Figure 2a) can be estimated as the cardinality of the semi-join of $R_0$ with $R_1$ (denoted $|R_0 \ltimes R_1|$), plus some additional rows due to the Bloom filter false positive rate.
In other words, $|R_0 \hat{\times} R_1| \geq |R_0 \ltimes R_1|$, where equality occurs if the false positive rate is 0. Then, we note that $|R_0 \ltimes R_1| \geq |R_0 \ltimes (R_1, R_2, \ldots, R_n)|$ if the semi-join clause is between $R_0$ and $R_1$, because any joins that $R_1$ has with other relations before joining with $R_0$ may remove some distinct elements of the joining $R_1$ column, and reduce the number of elements in a Bloom filter built on $R_1$. This, in turn, would reduce the number of rows coming out of a scan that applies a Bloom filter on $R_0$. So, to estimate the cardinality, and therefore cost, of a Bloom filter sub-plan we must know the set of relations that appear on the build side of the hash join. As shown in Figure 2, the cardinality of $R_0$ with a Bloom filter built from $R_1$ is different when $R_1$ is first joined to $R_2$. This dependency poses a problem for a bottom-up optimizer, where the set of relations appearing on the build side of a join is not generally known a priori. Because plans are built bottom up, the cardinality of a Bloom filter sub-plan on $R_0$ cannot be known until the optimizer knows the set of relations to which that sub-plan will be joined. A naïve solution may maintain several uncosted sub-plans with unresolved Bloom filter information. These uncosted, unresolved sub-plans would inevitably be combined with relations that do not provide the build side of the Bloom filter and, while uncosted, these sub-plans cannot be pruned, so the number of sub-plans that need to be maintained would grow exponentially with each join that does not resolve the Bloom filter. A Bloom filter sub-plan can only be fully costed when the Bloom filter becomes resolved, i.e., when evaluating a join where the requisite relation appears on the build side.
It is only then that the cardinality of Bloom filter sub-plans can be estimated, a necessarily recursive process in which the sub-plan is traversed to the leaf table scan whose cardinality and cost must be computed with respect to the now-known set of relations on the hash join build side. The computed cardinality of the leaf table scan, in turn, influences the cardinality and cost of any intermediate plan nodes back up to the resolution node. We found a naïve approach like this led to prohibitive optimization times: for example, a 3-table join query took $28~\mathrm{ms}$ to optimize, a 4-table join query took $375~\mathrm{ms}$, and a 5-table join query took $56~\mathrm{s}$. A 6-table join did not finish its optimization in more than $30~\mathrm{min}$, and we did not wait longer.

Figure 2: The cardinality of a scan on $R_0$ after applying a Bloom filter built from a column of relation $R_1$ (denoted by the dashed arrow labeled $\mathbf{BF}(R_1)$) depends on the set of relations on the build side of the hash join in which that Bloom filter is created. The cardinality of $R_0$ is $|R_0 \hat{\times} R_1|$ when the build side consists solely of relation $R_1$, as in panel (a). So in this case, the Bloom filter sub-plan for $R_0$ has property $\delta = \{R_1\}$; it requires the relation $R_1$ be joined to it for the Bloom filter to be resolved. When the build side includes both $R_1$ and $R_2$ (panel b), the cardinality of $R_0$ after the Bloom filter is applied becomes $|R_0 \hat{\times} (R_1, R_2)|$, which may be lower than that in panel (a). In this case, the Bloom filter sub-plan for $R_0$ has property $\delta = \{R_1, R_2\}$; it requires both relations $R_1$ and $R_2$ to be joined to it for its Bloom filter to be resolved.
Note that the build (inner) side of the hash join is depicted on the right side in our convention.

# 3.2 Our approach: BF-CBO

The key to preventing this exponential growth in the search space is to delay planning until pruning is possible. That way, numerous uncosted plans need not be maintained. We achieve this property of delaying planning until pruning is possible with a two-phase bottom-up optimization approach. At a high level, this involves the following steps, which we will describe in detail in the following sections.

(1) Marking Bloom filter candidates. We first identify which base tables are suitable candidates for applying a Bloom filter and attach the required Bloom filter information to those tables.
(2) First bottom-up phase. We compute a first bottom-up pass in which all the valid join combinations are decided, and all sets of relations that appear on the build side of a join with Bloom filter candidate relations are identified (we can denote one such set as $\delta = \{R_a, R_b, \ldots, R_z\}$, the required build-side relations for a Bloom filter sub-plan to be resolved).
(3) Costing Bloom filter sub-plans. We create new Bloom filter scan sub-plans on base tables and estimate their cardinality and cost according to $\delta$.
(4) Second bottom-up phase. We compute a second bottom-up pass in which all intermediate sub-plans are fully costed and planned, with any Bloom filter sub-plans adhering to the join-order restrictions set out by the assumptions inherent in $\delta$.

# 3.3 Marking Bloom filter candidates

Our solution begins after the optimizer has estimated the cardinality of each base relation (i.e., table scan) and added sub-plans for the scan of those base relations, but before the optimizer has created any join sub-plans or evaluated the cost of any join combinations. We first identify all Bloom filter candidates that can be applied to base relations based on the join clauses in the query.
A Bloom filter candidate includes information identifying the join clause, the Bloom filter build table and column, as well as the apply column, and it is attached as additional information to the base relation to which the associated Bloom filter can be applied (no information is attached to the build-side relation). Multiple Bloom filter candidates from different join clauses can be attached to the same relation. Each Bloom filter candidate has an initially empty list, $\Delta = [\,]$, of required build-side relation sets that becomes populated during the first bottom-up pass (e.g., $\Delta = [\delta_0, \delta_1, \ldots, \delta_n]$) and which identifies all valid Bloom filter scan sub-plans. Bloom filter candidates should be thought of as a property of the relation to which a Bloom filter might be applied, rather than as a property of the many sub-plans that can realize that relation. We use some heuristics to help us limit the total number of Bloom filter candidates, which can reduce the number of additional sub-plans that must be evaluated during CBO. These heuristics are implemented throughout our method, so we enumerate them in the text and summarize them in Section 3.10. First, for a pair of relations in a join clause, we only include a Bloom filter candidate on the larger of the two tables (Heuristic 1), as it is often more likely that the Bloom filter has a greater filtering capacity in this configuration. If we have a multi-way equivalence clause, then we only consider building a Bloom filter from the smallest table and applying it to the larger tables. Second, if the estimated number of rows on the apply-side table is below a threshold, it may not be worthwhile to apply a Bloom filter anyway, so we do not include a Bloom filter candidate in that case (Heuristic 2). Further heuristics are applied during bottom-up optimization.
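A minimal sketch of Heuristics 1 and 2 for marking candidates might look as follows. The function, the dictionary layout, and the row threshold value are illustrative assumptions for this sketch, not the system's actual names or tuning.

```python
MIN_APPLY_ROWS = 100_000  # Heuristic 2 threshold (assumed value)

def mark_candidates(join_clauses, card):
    """join_clauses: [(table_a, col_a, table_b, col_b)]; card: {table: est_rows}.
    Returns {apply_table: [candidate dicts]} with an initially empty Delta list."""
    bfcs = {}
    for ta, ca, tb, cb in join_clauses:
        # Heuristic 1: apply the Bloom filter to the larger table,
        # building it from the smaller one.
        apply_t, apply_c, build_t, build_c = (
            (ta, ca, tb, cb) if card[ta] >= card[tb] else (tb, cb, ta, ca))
        # Heuristic 2: skip apply-side tables too small to benefit.
        if card[apply_t] < MIN_APPLY_ROWS:
            continue
        bfcs.setdefault(apply_t, []).append(
            {"a": (apply_t, apply_c), "b": (build_t, build_c), "Delta": []})
    return bfcs
```

On the running example's cardinalities, this places one candidate on $t1$ (built from $t2.c1$) and one on $t3$ (built from $t2.c2$), matching Heuristic 1's larger-table rule.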
We also restrict applying any Bloom filter candidates if the build and apply column would cross a full outer join or an anti join; applying a Bloom filter in these cases could yield incorrect results, so this restriction is not considered a heuristic. Similarly, if a Bloom filter candidate's build and apply column were to cross a left outer join, we must restrict the apply column from being on the row-preserve (left) side, as that would also yield incorrect results. We will now introduce a running example for each step of our method.

Example 3.1. Marking Bloom filter candidates. Consider the example query:

SELECT * FROM t1, t2, t3 WHERE t1.c2 = t2.c1 AND t2.c2 = t3.c1 AND t2.c3 < 100;

with estimated base relation cardinalities $|t1| = 600M$, $|t3| = 1M$, and $|t2| < |t3|$, where $t2.c2$ is a foreign key of $t3.c1$. For each hashable join clause we may place one Bloom filter candidate (BFC). So for $t1.c2 = t2.c1$, we place a BFC on $t1$ because it has a larger cardinality than $t2$ (Heuristic 1). Similarly, for $t2.c2 = t3.c1$ we place a BFC on $t3$ because it has a larger cardinality than $t2$. In summary, we have the following BFCs:

• $t1.\mathrm{bfc}_1$: $a = t1.c2$, $b = t2.c1$, $\Delta = [\,]$
• $t3.\mathrm{bfc}_1$: $a = t3.c1$, $b = t2.c2$, $\Delta = [\,]$

where $a$ records the apply-side relation and column, and $b$ records the build-side relation and column.

# 3.4 First bottom-up phase

In the first bottom-up pass, we simulate the process of combining relations as in normal bottom-up CBO. However, instead of costing any sub-plans, we only populate the list of $\delta$ relation sets, $\Delta$, that are observed during this process.
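The first pass just described can be sketched as follows, under the simplifying assumptions that join enumeration is a plain subset enumeration and that a caller-supplied `joinable` predicate decides which pairings are valid; all names here are illustrative.

```python
from itertools import combinations

def all_proper_subsets(s):
    """Yield every non-empty proper subset of the relation set s."""
    return (frozenset(c) for k in range(1, len(s))
            for c in combinations(sorted(s), k))

def first_pass(tables, bfcs, joinable):
    """Populate each candidate's Delta with the build-side (inner) relation
    sets observed while simulating bottom-up join enumeration (sketch only).

    bfcs: {apply_table: [{'b': (build_table, col), 'Delta': [...]}, ...]}
    joinable(outer, inner): whether these two relation sets may be joined.
    """
    for level in range(2, len(tables) + 1):
        for combo in combinations(tables, level):
            rels = frozenset(combo)
            for inner in all_proper_subsets(rels):
                outer = rels - inner
                if not joinable(outer, inner):
                    continue
                # a candidate is only resolvable when its build relation
                # appears on the inner (build) side of the join pair
                for t in outer:
                    for bfc in bfcs.get(t, []):
                        if bfc["b"][0] in inner and inner not in bfc["Delta"]:
                            bfc["Delta"].append(inner)
```

No join sub-plans are costed here; the pass only records which build-side sets $\delta$ are reachable, exactly the information needed before any Bloom filter sub-plan can be costed.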
For example, for a join involving three relations, $R_0, R_1, R_2$, if we have a Bloom filter candidate applied to $R_0$ built from a column of $R_1$, then during this process we may observe these two relation sets: $\delta_0 = \{R_1\}$ and $\delta_1 = \{R_1, R_2\}$, as indicated by the join orders in Figure 2; so the list of possible relation sets for that Bloom filter candidate on $R_0$ would be $\Delta = [\delta_0, \delta_1]$. During the first bottom-up pass, we can also prune any $\delta$s where the Bloom filter candidate join clause consists of a foreign key on the apply side referencing a lossless primary key on the build side (Heuristic 3). When the primary key column is unfiltered and is the column used to build the Bloom filter, then we know that it will not filter any rows on the apply side, so we need not create a Bloom filter scan sub-plan for that scenario. This heuristic is implemented here because we can only determine if the primary key is lossless with respect to this sub-plan once we know the complete set of relations, $\delta$, to appear on the build side of the join.

Example 3.2. First bottom-up phase. Continuing from Example 3.1, during the first bottom-up phase, we would observe the following ordered join combinations (grouped by the join relations they create), and would populate $\Delta$ for each BFC, if possible. In each case, we defer computing any join sub-plans.

Join Relation: $(t1, t2)$.

• $t1$ JOIN $t2$: Here, $t1$ is the outer relation and $t2$ is the inner relation. $t1$ has a BFC, namely $t1.\mathrm{bfc}_1$, and the inner relation ($t2$) supplies the required build column. So we populate $\Delta$ with the inner relations observed for this join pair (i.e., $\delta = \{t2\}$). The updated BFC is $t1.\mathrm{bfc}_1$: $a = t1.c2$, $b = t2.c1$, $\Delta = [\{t2\}]$.
• $t2$ JOIN $t1$: Here, $t2$ is the outer relation and $t1$ is the inner relation, so this join pair cannot supply the build column for $t1.\mathrm{bfc}_1$ because the Bloom filter must be built on the inner (build) side of a hash join. There is nothing to do for this join pair.

Join Relation: $(t2, t3)$.

• $t2$ JOIN $t3$: Here, $t3$ has a BFC ($t3.\mathrm{bfc}_1$), but it is on the inner side of the join, so its build column cannot be resolved here. There is nothing to do for this join pair.
• $t3$ JOIN $t2$: Here, $t3$ is the outer relation and the inner is $t2$, which supplies the required build column. So we populate $\Delta$ as follows: $t3.\mathrm{bfc}_1$: $a = t3.c1$, $b = t2.c2$, $\Delta = [\{t2\}]$.

Join Relation: $(t1, t2, t3)$.

• $(t1, t2)$ JOIN $t3$: Here, the inner relation is $t3$, and it does not supply the build column of $t1.\mathrm{bfc}_1$, so there is nothing to do.
• $t3$ JOIN $(t1, t2)$: Here, the inner relation is $(t1, t2)$, which supplies the build column of $t3.\mathrm{bfc}_1$. We update its $\Delta$ as follows: $t3.\mathrm{bfc}_1$: $a = t3.c1$, $b = t2.c2$, $\Delta = [\{t2\}, \{t1, t2\}]$.
• $(t2, t3)$ JOIN $t1$: Here, the inner relation, $t1$, does not supply the build relation for the outer BFC ($t3.\mathrm{bfc}_1$), so there is nothing to do.
• $t1$ JOIN $(t2, t3)$: Here, the inner relation is $(t2, t3)$, which supplies the build column of $t1.\mathrm{bfc}_1$. We update its $\Delta$ as follows: $t1.\mathrm{bfc}_1$: $a = t1.c2$, $b = t2.c1$, $\Delta = [\{t2\}, \{t2, t3\}]$.

In this example, we did not encounter any join pairs where the Bloom filter candidate join clause consists of a foreign key on the apply side referencing a lossless primary key on the build side.
So, we did not prune any potential $\delta$s due to Heuristic 3.

# 3.5 Costing Bloom filter sub-plans

After the first bottom-up pass, each Bloom filter candidate should have a list of valid $\delta$s. For each of those $\delta$s, we create a scan sub-plan that includes the application of a Bloom filter during scanning. We compute the cardinality of this Bloom filter sub-plan using the estimated selectivity of a semi-join of this relation and those in $\delta$, plus the estimated Bloom filter false positive rate. For example, we may create one Bloom filter sub-plan for the scan of $R_0$ with $\delta = \{R_1\}$ with a cardinality estimate of $|R_0 \hat{\times} (R_1)|$. We may create another Bloom filter sub-plan for the scan of $R_0$ with $\delta = \{R_1, R_2\}$ with a cardinality estimate of $|R_0 \hat{\times} (R_1, R_2)|$, which may have fewer estimated rows but also require more relations on the build side of the hash join on which the Bloom filter will be built. After creating these new Bloom filter sub-plans, we attempt to add them to the base relation's list of lowest-cost sub-plans (i.e., the relation's plan-list). During this process, the new Bloom filter sub-plans can be pruned against one another based on the property of $\delta$, as follows. If a new sub-plan requires more relations in $\delta$ than pre-existing sub-plans in the relation's plan-list, but it has fewer rows, then it will be kept as an interesting option, regardless of cost. If the new sub-plan requires more relations in $\delta$, but it does not have fewer rows, then we know the extra required relations in $\delta$ did not provide more filtering capacity and the new sub-plan can be immediately pruned. This immediate pruning helps to limit the search space explored throughout bottom-up optimization.
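The $\delta$-based pruning rule above can be sketched as a dominance check; the dictionary fields are illustrative names for this sketch.

```python
def try_add_bf_subplan(plan_list, new):
    """Each sub-plan carries: rows, cost, and delta (required build-side set).
    A new sub-plan that needs a superset of some existing delta without
    achieving fewer rows is pruned immediately, per the rule above."""
    for p in plan_list:
        if new["delta"] >= p["delta"] and new["rows"] >= p["rows"]:
            # extra required build-side relations added no filtering capacity
            return plan_list
    # kept as an interesting option, regardless of cost
    plan_list.append(new)
    return plan_list
```

On Example 3.3's $t1$ case, the second sub-plan ($\delta = \{t2, t3\}$, same 22M-row estimate) is rejected by this check, while a hypothetical sub-plan with a genuinely lower row estimate would be retained.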
If multiple Bloom filter candidates exist on the same relation (originating from different join clauses), then we create Bloom filter scan plans with all possible Bloom filter candidates applied, as opposed to testing out sub-plans with various subsets of Bloom filter candidates applied. In other words, we apply all valid candidate Bloom filters on a base table simultaneously, as an additional heuristic to limit the search space (Heuristic 4). We do, however, allow for various combinations of $\delta$s when creating new Bloom filter scan sub-plans. The Bloom filter false positive rate can be derived from the number of bits in the Bloom filter array and the number of hash functions used in the Bloom filter. The number of bits in the Bloom filter is determined through an upper-bound estimate of the number of distinct values on the Bloom filter build side. The number of hash functions is fixed at two for performance reasons. We remove any sub-plans whose estimated Bloom filter size is beyond a threshold (Heuristic 5). The purpose of this restriction is to limit the size of created Bloom filters so that they can mostly be accommodated by the L2 cache. If a Bloom filter spills beyond this, there will be slowdowns in accessing and probing that Bloom filter, reducing its benefits. We also remove any Bloom filter sub-plans whose estimated selectivity (excluding false positives) is higher than a threshold (Heuristic 6). In this way we only retain Bloom filter candidates that are likely to have a large enough filtering capacity. We model the cost of applying the Bloom filter as a constant value ($k$) times the number of rows to be filtered, to represent the cost of evaluating the Bloom filter hash functions for each row. $k$ is set to be smaller than the cost per row of a regular hash table lookup.
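The size, false-positive, and cost pieces of this model can be sketched as follows. The threshold constants and the per-row apply cost are assumed values for illustration, not the system's tuning; the false-positive formula is the standard Bloom filter estimate, and we read Heuristic 6 as rejecting plans whose semi-join selectivity is too high (i.e., too little filtering), consistent with Example 3.3.

```python
import math

L2_BUDGET_BITS = 8 * 1024 * 1024 * 8  # Heuristic 5: roughly L2-sized (assumed)
MAX_SELECTIVITY = 0.5                 # Heuristic 6 threshold (assumed)
K_HASH = 2                            # number of hash functions, fixed at two
K_APPLY_COST = 0.1                    # per-row apply cost; < hash-table lookup (assumed)

def false_positive_rate(m_bits, n_distinct, k=K_HASH):
    """Standard Bloom filter FPR estimate: (1 - e^(-k*n/m))^k."""
    return (1.0 - math.exp(-k * n_distinct / m_bits)) ** k

def keep_bf_subplan(m_bits, semijoin_selectivity):
    if m_bits > L2_BUDGET_BITS:             # Heuristic 5: filter too large for L2
        return False
    if semijoin_selectivity > MAX_SELECTIVITY:  # Heuristic 6: filters too little
        return False
    return True

def apply_cost(input_rows, k=K_APPLY_COST):
    """Cost of probing the Bloom filter for every input row."""
    return k * input_rows
```

With any threshold between 0.006 and 0.77, this check reproduces Example 3.3's outcome for $t3$: the $\delta = \{t2\}$ sub-plan (selectivity 0.77) is rejected and the $\delta = \{t1, t2\}$ sub-plan (selectivity 0.006) is kept.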
We also provide a mechanism to account for the cost of building each Bloom filter, but in practice we found this cost to be negligible, so it is set to zero in our cost model.

Example 3.3. Costing Bloom filter sub-plans. Continuing from Example 3.2, after the first bottom-up phase, we have the following Bloom filter candidates:

• $t1.\mathrm{bfc}_1$: $a = t1.c2$, $b = t2.c1$, $\Delta = [\{t2\}, \{t2, t3\}]$
• $t3.\mathrm{bfc}_1$: $a = t3.c1$, $b = t2.c2$, $\Delta = [\{t2\}, \{t1, t2\}]$

For the scan of $t1$ we create two Bloom filter sub-plans, each with a single Bloom filter applied, but in each sub-plan that Bloom filter has a unique $\delta$, one for each $\delta$ in the candidate's $\Delta$ above. We include the cardinality for each of the sub-plans as follows:

• $t1.\mathrm{bf\text{-}subplan}_0$: bfs $= [(a = t1.c2, b = t2.c1, \delta = \{t2\})]$, rows $= |t1 \hat{\times} t2|$
• $t1.\mathrm{bf\text{-}subplan}_1$: bfs $= [(a = t1.c2, b = t2.c1, \delta = \{t2, t3\})]$, rows $= |t1 \hat{\times} (t2, t3)|$

Now, suppose the Bloom filters in both of these sub-plans yield the same estimated cardinality; for example, $|t1 \hat{\times} t2| = |t1 \hat{\times} (t2, t3)| = 22M$. In this case, the optimizer believes there is no added benefit in first joining $t2$ to $t3$ when building the Bloom filter. This follows from the fact that there is no local predicate on $t3$ that could be transferred through the join of $t2$ and $t3$ to $t1$. Next, we compute the cost for each sub-plan and try to add each sub-plan to the base relation's plan-list.
$t1$'s plan-list should have at least one pre-existing non-Bloom filter costed sub-plan with a row estimate of 600 million. Then, for each Bloom filter sub-plan, we add extra cost for applying the Bloom filter to each input row of $t1$ as

$$ \text{extra cost} = k \cdot 600M. $$

So, the cost of each Bloom filter sub-plan is equal. The first sub-plan, $t1.\mathrm{bf\text{-}subplan}_0$, will be accepted in $t1$'s plan-list because it lowers the row count to 22 million (compared to 600 million), but the second sub-plan, $t1.\mathrm{bf\text{-}subplan}_1$, will be rejected since it has the same row count (22 million) and cost as $t1.\mathrm{bf\text{-}subplan}_0$ and requires an additional relation on the build side of the join. For the scan of $t3$, we similarly have two Bloom filter sub-plans:

• $t3.\mathrm{bf\text{-}subplan}_0$: bfs $= [(a = t3.c1, b = t2.c2, \delta = \{t2\})]$, rows $= |t3 \hat{\times} t2|$
• $t3.\mathrm{bf\text{-}subplan}_1$: bfs $= [(a = t3.c1, b = t2.c2, \delta = \{t1, t2\})]$, rows $= |t3 \hat{\times} (t1, t2)|$

Now, in this example, the selectivity of the semi-join $t3 \ltimes t2$ is 0.77, which is beyond our threshold, so $t3.\mathrm{bf\text{-}subplan}_0$ is rejected by Heuristic 6. However, the selectivity of the semi-join $t3 \ltimes (t2, t1)$ is 0.006, yielding a cardinality estimate for sub-plan $t3.\mathrm{bf\text{-}subplan}_1$ of 36 thousand rows, much lower than existing sub-plan row estimates of 1 million for $t3$. So, this sub-plan is accepted in $t3$'s plan-list. The extra cost is $k \cdot 1M$.

# 3.6 Second bottom-up phase

At the beginning of the second bottom-up pass, there now exist fully costed sub-plans for accessing every base table in the query. Some of these sub-plans may include the application of a Bloom filter (so-called Bloom filter sub-plans) and contain information about that Bloom filter, namely the build and apply columns, as well as the set of required $\delta$ relations. Since all Bloom filter sub-plans now have a cardinality estimate and are fully costed, bottom-up optimization can proceed as usual, subject to some additional constraints. First, when joining a Bloom filter sub-plan to another sub-plan that provides the build relation for the Bloom filter, the join method for that pair of sub-plans must be a hash join, and the Bloom filter sub-plan must be on the outer (probe) side. Other join types will be satisfied by non-Bloom filter sub-plans. Second, if the other sub-plan provides any relation listed in the Bloom filter sub-plan's $\delta$, then the join method must be a hash join because the cardinality of the Bloom filter sub-plan assumes all the relations in $\delta$ will be on the inner (build) side of a hash join. Consequently, the inner side of the join must provide all relations in $\delta$, with one exception: the inner side need not provide all relations in $\delta$ if it is also a Bloom filter sub-plan whose $\delta$ matches the outstanding relations in the outer's $\delta$. Figure 3 illustrates why we allow this exception.

Figure 3: During the first bottom-up pass, the join of $R_0$ to $(R_1, R_2)$ may have been observed, as depicted in panel (a).
Our approach creates a sub-plan for the scan of $R_0$ with a Bloom filter built from $R_1$ and an estimated cardinality of $|R_0 \hat{\times} (R_1, R_2)|$. The sub-plan has property $\delta = \{R_1, R_2\}$ because both $R_1$ and $R_2$ are required on the build side of the hash join (HJ) for the cardinality of that sub-plan to be accurate. During the second bottom-up pass, we forbid the join between sub-plans as depicted in panel (b) because $R_2$ does not appear on the build side of the HJ, violating the cardinality assumptions on the sub-plan of $R_0$. Panel (c) shows an allowed join because, even though $R_2$ does not appear on the build side of the HJ, $R_0$ is being joined with a sub-plan that requires a BF built from $R_2$ (i.e., the sub-plan of $R_1$ has property $\delta = \{R_2\}$). So, any filtering that $R_2$ does on $R_1$ will effectively be transferred to $R_0$ through the Bloom filter applied to $R_1$ (i.e., $\mathbf{BF}(R_2)$). The incomplete sub-plan in panel (c) may be completed in the next level of bottom-up optimization by a HJ with $R_2$ as shown in panel (d), which we consider to be equivalent to the join order in panel (a) (ignoring any Bloom filter false positives).

These additional constraints limit how we combine Bloom filter sub-plans, but they do not preclude the evaluation of all other non-Bloom filter sub-plans. So, during this second bottom-up phase, we are evaluating more combinations of sub-plans than before: all the original non-Bloom filter sub-plans plus newly added Bloom filter sub-plans. Despite adhering to the property of delaying planning until sub-plans can be pruned, the search space is still expanded, and as such, we expect planning time to be increased. We discuss several additional heuristics to combat this increase in Section 3.10.
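The validity check implied by these constraints, including the Figure 3 exception, can be sketched as a set comparison; the function and parameter names are illustrative.

```python
def valid_bf_hash_join(outer_delta, inner_rels, inner_delta=frozenset()):
    """Check whether an outer Bloom filter sub-plan may hash-join this inner side.

    outer_delta: relations the outer's Bloom filter requires on the build side.
    inner_rels:  relations the inner (build-side) sub-plan provides.
    inner_delta: the inner sub-plan's own unresolved Bloom filter requirement.
    """
    if not (outer_delta & inner_rels):
        return True  # the inner side does not touch the Bloom filter's delta
    outstanding = outer_delta - inner_rels
    # Either the build side provides all of delta, or the inner sub-plan's own
    # Bloom filter requirement covers the outstanding relations (Figure 3c).
    return not outstanding or outstanding <= inner_delta
```

Replaying Figure 3: with $\delta = \{R_1, R_2\}$ on $R_0$, joining to $R_1$ alone is forbidden (panel b), allowed when $R_1$'s sub-plan itself requires $\mathbf{BF}(R_2)$ (panel c), and allowed when the build side provides $(R_1, R_2)$ (panel a).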
When a Bloom filter sub-plan from a relation $R_0$ is joined to a sub-plan from another relation $R_1$ that resolves all Bloom filters in that sub-plan, the cost of the corresponding hash join for this sub-plan pair is computed. The cardinality estimate simply becomes the original cardinality estimate for the joined relation $(R_0, R_1)$ because all Bloom filters have been resolved in the joined sub-plan. Accordingly, the Bloom filter information is removed from the joined sub-plan so that it can compete against any other non-Bloom filter sub-plans in the joined relation $(R_0, R_1)$. When a Bloom filter sub-plan from relation $R_0$ is joined to a sub-plan from another relation $R_2$ that does not resolve all Bloom filters in that sub-plan, then the Bloom filter information from any unresolved Bloom filters is retained in the joined sub-plan. For example, if we have a sub-plan of $R_0$ with required build-side relations $\delta = \{R_1\}$, when we join that sub-plan to a sub-plan of $R_2$, the Bloom filter will not be resolved, so the new sub-plan of joined relation $(R_0, R_2)$ will retain the property $\delta = \{R_1\}$. Its cardinality will typically be lower than other non-Bloom filter sub-plans in the plan-list for $(R_0, R_2)$.

Example 3.4. Second bottom-up phase. Continuing from Example 3.3, after Bloom filter costing, we have the following Bloom filter sub-plans included in the plan-list for each of the base relations $t1$ and $t3$.

• $t1$.bf-subplan$_0$:
  – bfs = [($a = t1.c2$, $b = t2.c1$, $\delta = \{t2\}$)]
  – rows = 22M
• $t3$.bf-subplan$_1$:
  – bfs = [($a = t3.c1$, $b = t2.c2$, $\delta = \{t1, t2\}$)]
  – rows = 36K

We also have all existing non-Bloom filter sub-plans in the respective plan-lists for all base relations. Next, we'd evaluate joining all combinations of sub-plans from the base relations, building a costed plan bottom-up. We'd observe the same ordered join pairs from the first bottom-up phase, but this time we'd evaluate the cost of all join types, namely nested loop join, merge join, and hash join for all sub-plans, including the new Bloom filter sub-plans.

Join Relation: $(t1, t2)$.

• $t1$ JOIN $t2$: Here, the required Bloom filter in sub-plan $t1$.bf-subplan$_0$ can be resolved by the inner relation $t2$. We compute the cost of the corresponding hash join and allow it to compete with the existing sub-plans in the plan-list for relation $(t1, t2)$. It is accepted and removes several other higher-cost sub-plans from the plan-list. Since the joined sub-plan no longer requires any Bloom filters, the set of required build-side relations becomes null (i.e., $\delta = \emptyset$).
• $t2$ JOIN $t1$: Here, the required Bloom filter in sub-plan $t1$.bf-subplan$_0$ cannot be resolved because the Bloom filter build relation, $t2$, is on the outer side. Other non-Bloom filter sub-plans are evaluated and added to the plan-list as usual.

Figure 4: Join orders obtained for the running example in Section 3. Observed input row counts are shown in bold on the left (outer/probe) and right (inner/build) sides of each join; the planner's row estimates are italicized. Post-processing application of Bloom filters (panel a) does not apply any Bloom filters. BF-CBO (panel b) modifies the join order to apply a Bloom filter on $t1$, significantly reducing the observed input row counts of each join.

Join Relation: $(t2, t3)$.
• $t2$ JOIN $t3$: Here, the required Bloom filter in sub-plan $t3$.bf-subplan$_1$ cannot be resolved because the Bloom filter build relation, $t2$, is on the outer side.
• $t3$ JOIN $t2$: Here, the required Bloom filter in sub-plan $t3$.bf-subplan$_1$ cannot be resolved even though the Bloom filter build-column appears on the inner side; $t3$.bf-subplan$_1$ also requires $t1$ to appear on the inner side, i.e., it has property $\delta = \{t1, t2\}$.

Join Relation: $(t1, t2, t3)$.

• $(t1, t2)$ JOIN $t3$: Here, the sub-plan $t1$.bf-subplan$_0$ has already been resolved in relation $(t1, t2)$, and the sub-plan $t3$.bf-subplan$_1$ cannot be resolved because it is on the inner side.
• $t3$ JOIN $(t1, t2)$: Here, the sub-plan $t3$.bf-subplan$_1$ can be resolved because both of its required relations, $(t1, t2)$, appear on the inner side of the join. However, this joined sub-plan is rejected in our example because the estimated size of the Bloom filter is too large (Heuristic 5).
• $(t2, t3)$ JOIN $t1$: Here, the sub-plan $t1$.bf-subplan$_0$ cannot be resolved because the required Bloom filter build relation is on the outer side.
• $t1$ JOIN $(t2, t3)$: Here, the sub-plan $t1$.bf-subplan$_0$ can be resolved because the required relation, $t2$, appears on the inner side of the join. We compute the cost of the corresponding hash join and allow it to compete with the existing sub-plans in the plan-list for relation $(t1, t2, t3)$. However, in our example, it is rejected from the plan-list because there is an existing plan that has lower cost.

The winning plan in our running example is shown in Figure 4. It is one that applies a Bloom filter to $t1$ built from $t2$, then joins in $t3$.

# 3.7 Post-processing

Our costing method is limited to a single select-project-join query block, but there are sometimes useful Bloom filters that can be pushed through a sub-query.
For this reason, we retain the post-processing application of Bloom filters once the plan tree has been determined by BF-CBO. Bloom filters are added in places where either costing has determined they should be or where the pre-existing post-processing approach would have marked one. Post-processing re-applies the requirement that the selectivity of the Bloom filter (ignoring false positives) be larger than a threshold, along with several other heuristics.

# 3.8 Integration

We integrate our BF-CBO method into GaussDB [12] (see also [13, 14, 17]). GaussDB is a cloud-native distributed relational database system with a bottom-up query optimizer. Its query optimizer is extended from that of PostgreSQL [20], notably with added support for distribution across multiple nodes as well as symmetric multiprocessing (SMP). We added Bloom filters to the GaussDB optimizer as a planner post-process to serve as a baseline (BF-Post), and our two-phase bottom-up BF-CBO approach is integrated into the cost-based optimization of a single query block (i.e., a single select-project-join block). The current work represents an initial implementation of BF-CBO that has not yet been fully tuned in terms of the heuristics applied; we expect further improvements once this tuning is complete.

# 3.9 Runtime Implementation

Our execution engine currently supports applying Bloom filters in an SMP, single-node deployment. In this setup, hash joins with a degree of parallelism (DOP) larger than one can be executed with various well-known streaming strategies. These streaming strategies influence how Bloom filters are used as follows: (1) Broadcast join, build-side broadcast. In this scenario, the build-side relation can originate from a single thread and be broadcast to $n$ threads before computing the hash join with the probe side's $n$ threads. In this case, we build only one Bloom filter from one of the $n$ redundant hash tables on the build side and use it on the Bloom filter apply-side relation.
(2) Broadcast join, probe-side broadcast. In this scenario, the probe-side relation originates from a single thread before being broadcast to perform the hash join with $n$ threads on the build-side. In this case, the build side's $n$ threads are not redundant, so we must create individual Bloom filters for each thread. We merge these Bloom filters by performing a union of their bit vectors and apply the merged Bloom filter to the single-threaded apply-side. (3) Partition join, partition-unaligned. In this scenario, both the build-side and the probe-side of the hash join are multithreaded, and a redistribution operation on either side may be necessary to shuffle the data by grouping common values of a join column before $n$ partial hash joins are computed independently on each group of values. In this case, we build $n$ partial Bloom filters on the build-side, one for each partition of the hash join. A Bloom filter can, in general, be applied to a relation that is under some intermediate nodes on the probe-side, and the partitioning of that relation is not necessarily the same as the partitioning of the hash join in which the Bloom filter is built. When partitioning is unaligned like this, we can use the value of the Bloom filter partitioning column for distributed lookup of which Bloom filter partition to use, provided the partitioning column is available on the apply-side relation. When unavailable, we can use the bit vector merging strategy described for the broadcast join. (4) Partition join, partition-aligned. This scenario is also a partition join, but in this case, the partitioning of the relation to which the Bloom filter is applied is aligned with the partitioning of the hash join build-side. Here, we build $n$ partial Bloom filters on the build-side in the same way as the partition-unaligned case. On the apply-side of the Bloom filter, for each partition of a relation, we simply apply the appropriate Bloom filter partition.
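The bit-vector union used in the probe-side-broadcast case can be illustrated with a toy Bloom filter. The hashing scheme below is a stand-in for illustration only, not the engine's actual implementation.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter with the bit vector stored as a Python int."""
    def __init__(self, m: int = 1024, k: int = 3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, value):
        # Derive k bit positions from salted SHA-256 digests (illustrative choice).
        data = str(value).encode()
        for i in range(self.k):
            h = hashlib.sha256(data + bytes([i])).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, value):
        for p in self._positions(value):
            self.bits |= 1 << p

    def might_contain(self, value) -> bool:
        # No false negatives: every added value always passes this test.
        return all((self.bits >> p) & 1 for p in self._positions(value))

def merge(filters):
    """Union of bit vectors: a value present in any input filter passes the merged one."""
    out = BloomFilter(filters[0].m, filters[0].k)
    for f in filters:
        out.bits |= f.bits
    return out
```

Because merging is a bitwise OR, the per-thread filters built on the $n$ build-side threads can be combined cheaply into one filter for the single-threaded apply-side, at the cost of a somewhat higher false-positive rate than any individual filter.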
While it is possible to account for these different streaming strategies in the Bloom filter cost model, we did not do so for the results in this paper. Table scans wait for all Bloom filter partitions to become available before scanning can proceed, regardless of streaming strategy.

# 3.10 Bloom filter limiting heuristics

Throughout the description of our two-phase bottom-up method, we described several heuristics we applied to limit the search space of evaluating Bloom filter sub-plans or to improve expected efficiency. They were described in the context of where we implemented them, but we list them here as a summary for the reader.

• Heuristic 1: Bloom filter candidates are only applied on the larger relation for each hashable join clause (Section 3.3).
• Heuristic 2: Bloom filter candidates are only applied on relations whose estimated cardinality surpasses a threshold (Section 3.3).
• Heuristic 3: Bloom filters cannot be applied to foreign keys joining with lossless primary keys (Section 3.4).
• Heuristic 4: All Bloom filter candidates that can be applied on a relation must be applied simultaneously when creating scan sub-plans (Section 3.5).
• Heuristic 5: If the expected size of the Bloom filter is beyond a threshold, a Bloom filter is not created (Section 3.5).
• Heuristic 6: Bloom filters whose estimated selectivity is below a threshold are removed (Section 3.5).

Several other heuristics could be applied to limit the search space of evaluating Bloom filter sub-plans, which we did not implement in our main results (Section 4.2). For example:

• Heuristic 7: During planning, if a relation has too many Bloom filter sub-plans, only the one with the fewest estimated rows should be kept. Ties may be broken by keeping only the sub-plan with the lowest total cost. This heuristic should limit the search space of BF-CBO. We explore its effect in Section 4.4.
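Heuristic 7 amounts to a small pruning step over each relation's plan-list. The sketch below is illustrative; the plan-list representation and field names are hypothetical, and the limit of four matches the experiments described in Section 4.4.

```python
# Hypothetical sketch of Heuristic 7: if a relation accumulates more Bloom
# filter sub-plans than `limit`, keep only the one with the fewest estimated
# rows, breaking ties by the lowest total cost.
def apply_heuristic_7(bf_subplans, limit=4):
    if len(bf_subplans) <= limit:
        return bf_subplans
    return [min(bf_subplans, key=lambda p: (p["rows"], p["cost"]))]
```

Pruning on estimated rows first reflects the intuition that the most selective Bloom filter sub-plan is the one most likely to unlock a better join order downstream.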
• Heuristic 8: If the total join-input cardinality during bottom-up phase 1 is below a threshold, adding Bloom filter candidates should be skipped. With no Bloom filter candidates, the planning search space will not be expanded, so the second bottom-up phase would revert to normal CBO. This heuristic is meant to differentiate quick transactional queries, where additional optimization time is not necessary, from long-running analytical queries, where the additional time spent considering Bloom filters during planning can make a big difference. The total join-input cardinality can be computed as the cumulative sum of the cardinality of join inputs, for all joins considered during bottom-up phase 1. The maximum join-input size could be an additional signal to decide if Bloom filter candidates should be skipped.
• Heuristic 9: Allow Bloom filter candidates to be applied to both relations in a join clause, but keep only the $\delta$s that are smaller than the apply-side relation. This is a slightly more permissive alternative to Heuristic 1 that will consider Bloom filters for relations that are larger than the build-side relation, for any combination of joined relations that make up the build side. This allows a Bloom filter candidate to be applied to the smaller base relation of a join-clause pair, but only for cases where an intermediate join will reduce the size of the larger base relation of the join-clause pair.

# 4 Experimental analysis

# 4.1 Dataset and environment

We ran our analysis on a TPC-H dataset of scale factor 100 (approximately 100 GB). Each of the 22 TPC-H queries was run five times, with the average of the last four presented here to represent performance after data had been loaded into memory. The dataset tables were stored in a columnar format, range-partitioned by date, and foreign key constraints were added in compliance with TPC-H documentation.
We ran all queries with a DOP of 48, as our experiments were performed on an x86 server with 48 CPUs and 503 GB of memory. Several queries were run with query-specific database configuration parameters, as other areas of our optimizer are actively being refined; we held these configuration parameters fixed between the baselines and BF-CBO for a fair comparison. In our experiments, the selectivity threshold was set to $\frac{2}{3}$, so that Bloom filter candidates were kept only if they were expected to filter out at least $\frac{1}{3}$ of the rows (Heuristic 6). We only marked Bloom filter candidates if the number of rows in the table they were applied to was greater than 10 thousand (Heuristic 2). Bloom filters were considered too big if the estimated upper-bound number of distinct values on the build side was beyond 2 million (Heuristic 5).

# 4.2 Results

The latencies of the TPC-H queries are shown in Figure 5, with more details in Table 2. Single-table queries (Q1 and Q6), as well as queries that did not produce Bloom filters in any scenario (Q13–15, Q22), are omitted from the analysis. The latencies are normalized to the latency of running the query without Bloom filters enabled (No BF). Table 2 also shows the percent reduction in query latency ($\%\downarrow$) of BF-CBO compared to BF-Post, as well as the absolute latencies of plan optimization for both BF-CBO and BF-Post. Note that planner runtime is included in the measurement of absolute query latency before normalization. Query numbers (Q#) where BF-CBO selected a different plan than BF-Post are shown italicized in red. Across all analyzed queries, including Bloom filters in plan post-processing (BF-Post) reduced the runtime by 28.8%, and including Bloom filters during bottom-up CBO reduced the runtime by 52.2%, relative to no Bloom filters at all. The addition of BF-CBO led to a 32.8% reduction in runtime compared to BF-Post, so there is a significant benefit to including the effect of Bloom filters during CBO rather than simply as a post-process. Several queries had a large reduction in latency, such as Q7, Q8, Q12, Q16, Q20, and Q21. We will examine the query plans from some of these queries in subsequent sections to explain these improvements.

Figure 5: Latencies for TPC-H queries are shown normalized to each query without any Bloom filters applied (No BF, shown as dashed line). Adding Bloom filters during plan post-processing (BF-Post, shown in blue) reduces query latency by 28.8%. Including Bloom filters during bottom-up cost-based optimization (BF-CBO, shown in orange) improves query latency by a further 32.8% relative to BF-Post. TPC-H queries that did not apply any Bloom filters are omitted.

Table 2 also shows that there is some overhead in planner runtime with BF-CBO compared to BF-Post. To plan all the queries, BF-CBO took 540.7 ms while BF-Post took 254.3 ms. Many queries showed negligible overhead when using BF-CBO, but some queries, like Q8 and Q9, had large increases in planner latency. Increased planner runtime is expected with BF-CBO as there are more sub-plan combinations to search, but end-to-end, we see a large improvement in query latencies. The trade-off between increased planner runtime and query plan improvement will depend on the context, with BF-CBO being more appropriate for long-running analytical queries rather than quick transactional queries. As incorporating Bloom filters during query optimization will adjust the cardinality of scan nodes where the Bloom filter is applied, we expected an improvement in cardinality estimation as well. We found that BF-CBO had a mean absolute error (MAE) of $5.3 \times 10^6$ for the cardinality estimates of all intermediate plan nodes, compared to $2.5 \times 10^7$ for BF-Post, a 78.8% improvement.
Indeed, it is this improvement in the cardinality estimates of Bloom-filtered table scans that enables the improved query plans in this paper.

Table 2: TPC-H query latencies

# 4.3 Query Plan Analysis

We showed in Figure 1 that BF-CBO allowed for the selection of a plan that positioned orders on the probe side of the hash join, provided that a Bloom filter would be applied during its scan. The reason this plan could be selected is that BF-CBO includes a sub-plan with a Bloom filter applied to orders, reducing the row estimate of orders from 150 million to 6.4 million rows, which in turn reduces the estimated cost of performing the hash join. Another example is shown in Figure 6 for TPC-H query 7, which shows the join order for its FROM clause, which appears in Listing 1. In this case, BF-CBO allows for a different join order that enables five Bloom filters to be applied instead of just one. BF-CBO uses Bloom filters to significantly reduce the size of several large tables in query 7. It applies two Bloom filters to the lineitem table, reducing its row count to 4 million (relative to 16 million in BF-Post), and it applies a Bloom filter to the orders table, reducing its row count from 150 million to 20 million. The customer and supplier tables are also significantly reduced using Bloom filters. These reductions in row count mean that joins throughout the query are faster and query performance is better. One of the reasons this join order can achieve these reductions in row count is its effective predicate transfer. Note that in BF-Post (panel a), there is no Bloom filter applied to the lineitem table originating from orders—the reason for its absence is that the join clause between these two relations ($o.orderkey = l.orderkey$) consists of a foreign key column (of lineitem) referencing an unfiltered primary key column (of orders). As explained earlier, in this scenario, a Bloom filter will not filter any rows of the foreign key column.
However, when including Bloom filters in bottom-up CBO (panel b), the planner has information that the Bloom filter applied to orders (i.e., BF($c$)) filters out some of those primary keys, enabling the additional Bloom filter on lineitem (i.e., BF($o$)). Similarly, the Bloom filter on orders (BF($c$)) is only enabled by BF-CBO because the customer relation is pre-filtered by another Bloom filter based on n2.nationkey. So, we have effective predicate transfer of the nation predicates—they reduce the size of the customer table, which in turn allows a Bloom filter to reduce the size of the orders table, which in turn allows a Bloom filter to reduce the size of the lineitem table.

Figure 6: Join orders for TPC-H query 7. Observed input row counts are shown in bold on the left (outer/probe) and right (inner/build) sides of each join. Streaming across threads is denoted by RD (redistribution) or BC (broadcast). Post-processing application of Bloom filters (panel a) applies a single Bloom filter on the relation $l$:lineitem built with respect to the relation $s$:supplier (black dashed arrow labeled BF($s$)). BF-CBO (panel b) changes the join order so that five Bloom filters can be applied, reducing the input row counts to many joins and improving the query latency by 83.7%.

Another reason BF-CBO performs effectively is that it places Bloom filters in such a way that they can be pushed down through a join. By crossing an intermediate join, each Bloom filter effectively reduces the input of multiple joins, magnifying its effect.

Listing 1: TPC-H Q7 FROM Clause

select
    n1.n_name as supp_nation,
    n2.n_name as cust_nation,
    extract(year from l_shipdate) as l_year,
    l_extendedprice * (1 - l_discount) as volume
from
    supplier, lineitem, orders, customer, nation n1, nation n2
where
    s_suppkey = l_suppkey
    and o_orderkey = l_orderkey
    and c_custkey = o_custkey
    and s_nationkey = n1.n_nationkey
    and c_nationkey = n2.n_nationkey
    and ((n1.n_name = 'FRANCE' and n2.n_name = 'GERMANY')
      or (n1.n_name = 'GERMANY' and n2.n_name = 'FRANCE'))
    and l_shipdate between date '1995-01-01' and date '1996-12-31'

Readers will also observe from Table 2 that adding Bloom filters to query 18 did not improve its runtime, even for BF-Post. The runtime of query 18 is dominated by a sub-query (not shown) to which no Bloom filter is applied, so adding Bloom filters to other table scans in the query, built on the output of this sub-query, did not result in improved latency overall. Part of the reason latency is not improved may be that, in our implementation, table scans wait for any required Bloom filters to be fully built before the scan can proceed. So when the query runs without any Bloom filters, those scans may have been able to start in parallel with the sub-query, instead of waiting for it to complete.
An alternative implementation could eagerly scan batches of data before the Bloom filter is fully built to take advantage of parallel processing, then, once ready, switch to using the Bloom filter for any remaining batches to be scanned. However, we believe that it is usually better to wait for the Bloom filter before starting the scan, because downstream operators will benefit from reduced rows compared to eager scanning. When using BF-CBO, the entire downstream query plan assumes that all Bloom filters will be fully utilized—violating this assumption may be detrimental.

# 4.4 Limiting Bloom filter sub-plans

In this section, we analyze the effect of enabling Heuristic 7, which limits the search space of Bloom filter sub-plans during bottom-up optimization. Specifically, if any given relation has too many Bloom filter sub-plans (more than four in our experiments) during bottom-up optimization, we prune those sub-plans down to only one—we keep the one with the fewest expected rows (or the lowest cost, if rows are equal). By enabling this heuristic, we expect planning to be quicker, but with some opportunity lost in finding the best query plan. The results for this restriction are shown in Table 3. The queries where BF-CBO resulted in different query plans than in Table 2 are shown italicized in green. The columns for BF-Post are identical to those in Table 2, but are repeated here for convenience.

Table 3: TPC-H query latencies, Heuristic 7 enabled

The first notable difference when Heuristic 7 is enabled is that planning latencies are shorter. In total, the planning time of all queries was 421.9 ms compared to 540.7 ms with Heuristic 7 disabled. For queries Q8 and Q21, in particular, we save considerable time planning when we limit the search space by enabling Heuristic 7. However, the query plan for Q8 is worse with the search space limited, and the query runtime now degrades by 4% compared to BF-Post.
There is a trade-off between planning latency and finding the best query plan. By limiting the search space through imposing Heuristic 7, we observe faster query planning latencies, but overall query latency is slightly degraded (a 31.4% reduction in latency over BF-Post compared to 32.8%), indicating that for this dataset, it is still worthwhile to explore more paths. As such, our heuristics may require further tuning. There are two potential explanations for the worse result observed in Q8 when the search space is limited. First, because we apply heuristics, we are removing some Bloom filter sub-plans from consideration. It is possible that the best plan appears among these removed Bloom filter sub-plans, but BF-CBO chooses a different plan because the cost of other sub-plans has been lowered by Bloom filters. BF-Post may arrive at the best plan by chance, as Bloom filters are not considered during planning. Second, the worse result could be due to imperfect cardinality estimations or an imperfect cost model. Our method can be thought of as improving the estimated cardinality of base tables to which Bloom filters are applied; but it still makes use of pre-existing methods for estimating join cardinality and a pre-existing cost model. An imperfect cost model can sometimes lead to worse query plans, even with a better cardinality estimate.
# Abstract

Bloom filters are used in query processing to perform early data reduction and improve query performance. The optimal query plan may be different when Bloom filters are used, indicating the need for Bloom filter-aware query optimization. To date, Bloom filter-aware query optimization has only been incorporated in a top-down query optimizer and limited to snowflake queries. In this paper, we show how Bloom filters can be incorporated in a bottom-up cost-based query optimizer. We highlight the challenges in limiting optimizer search space expansion, and offer an efficient solution. We show that including Bloom filters in cost-based optimization can lead to better join orders with effective predicate transfer between operators. On a 100 GB instance of the TPC-H database, our approach achieved a 32.8% further reduction in latency for queries involving Bloom filters, compared to the traditional approach of adding Bloom filters in a separate post-optimization step. Our method applies to all query types, and we provide several heuristics to balance limited increases in optimization time against improved query latency.
# I. INTRODUCTION

Large Language Models (LLMs) have demonstrated strong biomedical question-answering (QA) capabilities [1]. However, LLMs can produce factual inaccuracies, lack specific domain knowledge, and lack verifiability [2]. A major concern is hallucination, where LLMs generate factually incorrect responses due to their probabilistic nature. These hallucinations, together with a lack of verifiability, are particularly problematic in healthcare, where misinformation can lead to serious consequences. To mitigate these risks, Retrieval-Augmented Generation (RAG) systems leverage external knowledge sources at inference time by selecting relevant documents from a data store to enhance accuracy, transparency, and traceability [3]. Despite the potential of biomedical RAG systems, existing solutions often suffer from limited scalability, poor reproducibility, and suboptimal retrieval performance on large datasets such as PubMed. Existing benchmarks for medical question answering, such as MedExpQA [4] and MIRAGE [5], lack reproducibility and scalable retrieval solutions. Most retrieval methods rely on either sparse bag-of-words vectors such as BM25 [6] or dense vectors created through transformer-based models such as BioBERT [7] and MedCPT [8]. However, hybrid approaches that integrate both techniques remain under-investigated, especially from a system perspective: the inherent trade-offs between retrieval strategies, their indexing and response times, and the resulting generator accuracy are crucial for practical RAG applications and have been largely unexplored. Hybrid retrieval methods combine the strengths of sparse and dense retrieval: a probabilistic retriever (e.g., BM25) efficiently reduces the search space by filtering a large corpus, while a neural reranker (e.g., MedCPT's cross-encoder) refines document rankings based on semantic relevance.
This two-step approach balances computational efficiency and retrieval precision, mitigating the limitations of stand-alone methods. Although hybrid retrieval has been explored in general NLP tasks [9], its application in large-scale biomedical QA remains limited, particularly in real-world implementations. Developing an effective biomedical QA system requires addressing several challenges: (i) Efficient Retrieval at Scale: Processing millions of biomedical documents under reasonable indexing times while maintaining low-latency retrieval; (ii) Relevance Optimization: Improving document ranking by integrating lexical retrieval with neural reranking; (iii) Context Integration: Structuring retrieved documents effectively to generate factually accurate and verifiable responses. This work presents a scalable and reproducible RAG system for biomedical QA, systematically evaluating hybrid retrieval strategies. The key contributions include:

• Hybrid Retrieval Approach: A two-stage retrieval pipeline that integrates BM25 (lexical retrieval) with MedCPT's cross-encoder (semantic reranking), improving recall and precision.
• Scalability and Performance Analysis: Comparative evaluation of three common methods and systems, MongoDB, Elasticsearch, and FAISS, for large-scale document retrieval efficiency.
• Reproducibility and Transparency: Explicit citation of retrieved documents using PubMed IDs to ensure traceability in biomedical QA.

This work advances scalable and reproducible biomedical QA systems, enhancing their real-world applicability in clinical and research environments. All our code is open-sourced.

# II. BIOMEDICAL QUESTION-ANSWERING WITH RAG

Fig. 1: Biomedical Question Answering with Retrieval-Augmented Generation (RAG). Offline phase: biomedical documents are processed, indexed, and stored in a vector store. Online phase: users ask questions, for which a retriever retrieves relevant documents that are appended with the question and fed to an LLM.
Based on the question and context, the LLM generates an answer and cites the PubMed IDs it used.

Retrieval-Augmented Generation enhances the capabilities of LLMs by incorporating external data and grounding responses in verifiable and up-to-date information. This ensures that outputs incorporate relevant biomedical knowledge from sources such as PubMed [10], improving accuracy and transparency. Figure 1 illustrates our biomedical QA system based on RAG, which consists of two main phases: an offline phase for indexing biomedical literature and an online phase for retrieving relevant documents and generating responses. In the offline phase, biomedical documents are preprocessed and indexed into a vector store (dense vectors) and/or an inverted index (sparse vectors) for efficient retrieval. These choices directly affect retrieval speed and system scalability. During the online phase, users submit biomedical queries, which the retriever processes to fetch relevant documents. These retrieved documents, along with the query, are provided as context to an LLM, ensuring responses remain grounded in authoritative biomedical sources. The used sources are cited (PMIDs) and provided as references to users. In this setting, system performance should be evaluated based on three key aspects, as highlighted in Figure 1: (1) indexing and query time, which measures retrieval efficiency; (2) relevance of retrieved documents, ensuring the most informative sources are selected; and (3) answer correctness, verifying that the LLM-generated response aligns with biomedical evidence. Evaluating and optimizing these factors improves the reliability and transparency of biomedical QA systems.

# III. EXPERIMENTAL EVALUATION

We evaluate the efficiency and effectiveness of different retrieval and text-generation methods for biomedical question-answering (QA).
Our experiments focus on selecting optimal components for document retrieval, text generation, and overall system performance, following the three key aspects from Section II and using a common biomedical QA benchmark.
# A. Experimental Setup
Datasets. We evaluate our biomedical RAG system on the BIOASQ [11] QA benchmark, which builds on the PubMed database [5]. Specifically, we use a $10\%$ randomly sampled subset of 2.4M biomedical papers for the component analysis, while for the final system, we use the entire dataset of 24M papers. Each entry includes a PubMed ID (PMID), title, and abstract, with an average abstract length of 296 tokens. We evaluate our system using the Task-B dataset, which contains expert-annotated questions paired with supporting PubMed IDs (PMIDs). To ensure answerability within our PubMed subset, we first exclude factoid and list questions, which often require full-text access, making evaluation less precise. Second, we retain only questions with at least one PMID in our dataset to ensure they can be answered using our subset. Indexing and Retrieval Systems. We compare three storage and query systems: Elasticsearch, FAISS, and MongoDB. MongoDB, a NoSQL document database, supports full-text search with TF-IDF-like scoring in its self-hosted version. While BM25 ranking is available in MongoDB Atlas Search, it is a cloud-only service and was not used. Elasticsearch, built on Apache Lucene, uses BM25 ranking and inverted indexing for efficient text-based retrieval. FAISS (Facebook AI Similarity Search) optimizes dense vector similarity search and is commonly used in NLP and recommendation systems [12]. We deployed FAISS using a Flask-based server with a FlatL2 index for exhaustive search. Metrics: We evaluate indexing speed and response time on 2.4M PubMed papers to determine the best trade-off between efficiency and retrieval performance. Retrieval Methods.
Based on recall and precision, we evaluate four retrieval methods: BM25, BioBERT, MedCPT, and a hybrid approach (BM25 + MedCPT). BM25 [13] is a ranking algorithm that improves upon TF-IDF [14] by incorporating term frequency, document length normalization, and inverse document frequency. It ranks documents based on query relevance using a probabilistic scoring function. We implemented BM25 in Elasticsearch, with stopword removal for improved efficiency. BioBERT [7] is a domain-specific adaptation of BERT [15], pre-trained on PubMed abstracts and PMC articles to enhance biomedical text understanding. We use BioBERT to encode PubMed abstracts into semantic vectors via FAISS, computing document-query similarity with squared Euclidean distance. MedCPT [8] is a contrastive learning-based retriever trained on 255M PubMed query-article interactions. It consists of a query encoder, a document encoder, and a cross-encoder reranker. The cross-encoder refines retrieval results by reranking top candidates based on query-document contextual interactions. We use MedCPT to encode 2.4M abstracts.
Fig. 2: Prompt Template for Medical QA.
System Prompt: You are a scientific medical assistant designed to synthesize responses from specific medical documents. Only use the information provided in the documents to answer questions. The first documents should be the most relevant. Do not use any other information except for the documents provided. When answering questions, always format your response as a JSON object with fields for 'response' and 'used PMIDs'. Cite all PMIDs your response is based on in the 'used PMIDs' field. Please think step-by-step before answering questions and provide the most accurate response possible. Provide your answer to the question in the 'response' field.
User Prompt: Answer the following question: ...
Context Prompt: Here are the documents: "doc1": { "PMID": {...}, "title": {...}, "content": {...}, "relevance_score": {...} }
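The BM25 scoring function described above can be sketched in a few lines. This is a minimal textbook variant (one common IDF formulation, with the usual defaults $k_1 = 1.2$ and $b = 0.75$), not Lucene's tuned implementation:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each tokenized document against a query with BM25.

    `docs` is a list of token lists. Combines per-term IDF with a
    saturating term-frequency component normalized by document length.
    """
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # document frequency of each query term
    df = {t: sum(1 for d in docs if t in d) for t in query_terms}
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf * tf[t] * (k1 + 1) / norm
        scores.append(s)
    return scores
```

In practice Elasticsearch evaluates this over an inverted index, so only documents containing at least one query term are ever scored.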
We filter results based on positive relevance scores. The Hybrid Retriever integrates BM25 and MedCPT for enhanced retrieval performance. BM25 first ranks a broad set of documents in Elasticsearch, after which MedCPT's cross-encoder reranks the top-$k$ results. This combination leverages BM25's efficiency and MedCPT's semantic understanding to improve recall and precision. Metrics: We assess how well each method retrieves relevant documents. Since recall is critical for ensuring comprehensive retrieval, we prioritize methods that maximize relevant document retrieval while maintaining high precision. Text Generation. For text generation, we experiment with different prompting strategies for OpenAI's GPT-3.5-turbo (API version May 2024, temperature $=0$), ensuring that generated responses are accurate and contextually relevant. Given the biomedical domain's strict accuracy requirements, we focus on structured prompts that enhance factual consistency. We experimented with multiple prompting approaches, following best practices in medical NLP [5], [16]. Due to resource constraints, we evaluated GPT-3.5, with limited testing of GPT-4. Observations showed no significant differences in output quality. As illustrated in Figure 2, our final prompt consists of three components: (1) a system prompt with task-specific instructions, (2) a user query, and (3) retrieved documents with PubMed IDs (PMIDs), titles, and content. Metrics: For text generation, we evaluate answer correctness in terms of accuracy, recall, precision, and F1 score.
# B. Indexing and Query Time
Table I summarizes the performance of Elasticsearch, FAISS, and MongoDB. Elasticsearch excels in full-text retrieval but is less efficient for semantic vector search, which FAISS optimizes for. However, due to their complex data management and indexing mechanisms, MongoDB and Elasticsearch exhibit the slowest indexing speeds.
MongoDB, while providing a flexible NoSQL document storage solution, uses TF-IDF-based text ranking in its self-hosted version, which leads to significantly slower query response times compared to Elasticsearch and FAISS. The self-hosted MongoDB lacks efficient semantic retrieval, limiting its effectiveness in large-scale biomedical QA. Based on these results, we selected Elasticsearch for full-text retrieval and FAISS for semantic vector search. Despite its slower indexing speed, Elasticsearch provides a robust text-based search framework, while FAISS offers superior response times for vector-based queries.
TABLE I: Performance metrics for different search methods.
# C. Document Relevancy
Table II summarizes the retrievers' performance. The Hybrid Retriever achieved the highest recall (0.567), balancing efficiency and accuracy. BM25 exhibited strong precision but lower recall. MedCPT improved semantic retrieval but underperformed in recall, while BioBERT had the weakest results due to a lack of fine-tuning for question-answering tasks. Note that a low recall score does not necessarily indicate incorrect retrieval; rather, it means that the retrieved documents may not be included in the BioASQ-curated set.
TABLE II: Performance comparison of different retrievers.
# D. Answer Correctness of the RAG System (End-to-End)
For BM25, the query is processed using term-based retrieval, ranking documents based on query term occurrence. The top-$k$ ranked documents are embedded into the LLM context for response generation. For MedCPT, the query is encoded into a vector and compared against document embeddings for similarity search. The retrieved documents are reranked by a cross-encoder, and only those with positive relevance scores are used for response generation. For Hybrid Retrieval, BM25 first retrieves $k$ candidate documents, which are then reranked by MedCPT's cross-encoder. Only relevant documents are passed to the LLM.
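The end-to-end hybrid path just described can be sketched as follows. `bm25_search`, `cross_encoder_score`, and `llm` are hypothetical stand-ins for Elasticsearch, MedCPT's cross-encoder, and the chat-completion API; the JSON output format mirrors the prompt template in Figure 2:

```python
import json

def hybrid_rag_answer(question, bm25_search, cross_encoder_score, llm,
                      k=50, top_n=10):
    """End-to-end hybrid RAG: BM25 recall, cross-encoder rerank, generation.

    A sketch under stated assumptions: `bm25_search(q, k)` returns documents
    as dicts with PMID/title/content keys, `cross_encoder_score(q, doc)`
    returns a relevance score, and `llm(prompt)` returns a JSON string.
    """
    # Stage 1: cheap lexical recall over the corpus
    candidates = bm25_search(question, k)
    # Stage 2: semantic reranking; keep only positively relevant documents
    scored = sorted(((cross_encoder_score(question, d), d) for d in candidates),
                    key=lambda p: p[0], reverse=True)
    docs = [d for s, d in scored[:top_n] if s > 0]
    # Stage 3: grounded generation with explicit PMID citations
    context = "\n".join(f'PMID {d["PMID"]}: {d["title"]}. {d["content"]}'
                        for d in docs)
    prompt = ("Answer only from the documents below; reply as JSON with "
              "'response' and 'used PMIDs'.\n"
              f"Question: {question}\nDocuments:\n{context}")
    out = json.loads(llm(prompt))
    return out["response"], out["used PMIDs"]
```

The positive-score filter is what keeps weakly related BM25 candidates out of the LLM context, trading a little recall for grounding.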
Our results show that the hybrid retriever achieves the best answer correctness on all metrics (Table III).
TABLE III: Performance metrics of the end-to-end RAG system using different retrievers.
# IV. EVALUATION OF THE FINAL SYSTEM
After selecting the most efficient and effective components for our RAG system, we evaluate the final Retrieval-Augmented Generation (RAG) system with the hybrid retrieval approach on the full 24M-document PubMed corpus, measuring retrieval effectiveness, response time, and answer correctness. We use a Linux server with an Intel Broadwell processor (16 cores, 30GB RAM) and an NVIDIA A30 GPU (24GB VRAM) for cross-encoder reranking.
# A. Effect of Retrieval Depth on Performance
To evaluate the impact of retrieval depth on performance, we experimented with different configurations of BM25 retrieval, varying the number of initially retrieved documents while keeping the reranking step fixed at the top 10 (Table IV).
TABLE IV: Comparison for different retrieval depths (BM25), with reranking applied to the top 10 documents.
# B. Analysis of Retrieval Depth Trade-offs
Elasticsearch BM25 retrieval has an average response time of $82 \pm 37$ ms, which remains constant across all retrieval depths since it ranks all documents regardless of how many are later passed to reranking. The primary factor affecting response time is the cross-encoder reranking step using MedCPT, which processes a subset of the retrieved documents and incurs additional computational overhead. Increasing the number of retrieved documents leads to marginal accuracy improvements but significantly increases the reranking time. Retrieving 50 documents before reranking yields the best accuracy (0.90) and F1 score (0.90) while keeping response time manageable at 1.91 seconds. However, retrieving 100 documents leads to a drop in accuracy (0.87) and an increase in total response time to 2.62 seconds, suggesting diminishing returns beyond 50 documents.
The text generation phase relies on the OpenAI API, which introduces additional latency. The mean response time for generation is 1.07 seconds, with a standard deviation of 0.41 seconds. Since the generation time remains stable across configurations, the overall system latency is primarily determined by the retrieval depth and reranking time. These results demonstrate that increasing the number of retrieved documents beyond a certain threshold does not necessarily improve system performance. Instead, balancing retrieval depth with reranking efficiency is critical for real-world biomedical question-answering applications.
Biomedical question-answering (QA) systems require effective retrieval and generation components to ensure accuracy, efficiency, and scalability. This study systematically examines a Retrieval-Augmented Generation (RAG) system for biomedical QA, evaluating retrieval strategies and response time trade-offs. We first assess state-of-the-art retrieval methods, including BM25, BioBERT, MedCPT, and a hybrid approach, alongside common data stores such as Elasticsearch, MongoDB, and FAISS, on a ~10% subset of PubMed (2.4M documents) to measure indexing efficiency, retrieval latency, and retriever performance in the end-to-end RAG system. Based on these insights, we deploy the final RAG system on the full 24M PubMed corpus, comparing different retrievers' impact on overall performance. Evaluation of retrieval depth shows that retrieving 50 documents with BM25 before reranking with MedCPT optimally balances accuracy (0.90), recall (0.90), and response time (1.91s). BM25 retrieval time remains stable (82ms), while MedCPT incurs the main computational cost. These results highlight previously underexplored trade-offs in retrieval depth, efficiency, and scalability for biomedical QA. With open-source code, the system is fully reproducible and extensible.
[ "cs.IR", "cs.AI", "cs.DB", "cs.LG" ]
# 1 Introduction
Although large language models (LLMs) have demonstrated remarkable performance across a wide range of general tasks (Jiang et al., 2023; Chowdhery et al., 2023; Jian et al., 2023; Touvron et al., 2023b), they still fall short in certain tasks or domains, such as reasoning (Tong et al., 2024; Srivastava and Gandhi, 2024; Yu et al., 2025; Li et al., 2025), multilingualism (Huang et al., 2023; Gurgurov et al., 2024; Zhang et al., 2024a), and text generation in specialized contexts (Biancotti et al., 2024; Zhang et al., 2024b; Yang et al., 2024a,b; Li et al., 2024b,a; Wang et al., 2024; Chang et al., 2025). To enhance the performance of LLMs in these challenging areas, a common practice is fine-tuning. However, with the growing size of current LLMs, full fine-tuning faces significant challenges in terms of computational efficiency and memory consumption. To mitigate these issues, parameter-efficient fine-tuning (PEFT) methods have gained considerable attention (Houlsby et al., 2019; Li and Liang, 2021; Lester et al., 2021; Hu et al., 2022; Liu et al., 2022; Zhang et al., 2023; Yang et al., 2025). Among these methods, Low-Rank Adaptation (LoRA) (Hu et al., 2022) is regarded as one of the most efficient approaches. Nonetheless, its performance remains constrained due to the relatively small number of trainable parameters (Xu et al., 2023). Recent studies suggest that combining LoRA with the Mixture-of-Experts (MoE) paradigm, referred to as LoRA-MoE, by incorporating multiple LoRA modules, offers a promising solution to this limitation (Wu et al., 2024; Gao et al., 2024; Qing et al., 2024; Dou et al., 2024; Liu et al., 2023; Luo et al., 2024).
Table 1: Compared to existing methods, our proposed GuiLoMo strategy can allocate the optimal expert numbers and ranks within LoRA-MoE, tailored to specific models and tasks.
However, fully exploiting the potential of LoRA-MoE remains an open research question. First, Gao et al.
(2024) argued that uniformly allocating the number of experts across all layers is suboptimal, as different layers play distinct roles in the model. Over-allocating experts to certain layers can lead to redundancy and degraded performance. To address this, they proposed a group-wise expert allocation strategy (MoLA), which divides all layers into four groups and assigns varying numbers of experts to each group, ensuring that layers within the same group share the same number of experts. Building on this, Qing et al. (2024) introduced a layer-wise allocation strategy (AlphaLoRA), which theoretically determines the expert number for each layer based on its training quality. Despite these advancements, two critical limitations remain, as shown in Table 1: 1) These methods determine the expert number without considering the downstream task. This is problematic, as different tasks may have varying levels of complexity and specific needs, which should influence the optimal expert configuration (as supported by experiments in Appendix A); 2) These methods also overlook the intrinsic rank of LoRA experts, typically assigning the same rank to all LoRA experts. This uniformity leads to equivalent representational capacities across experts, causing them to capture similar information. Thus, LoRA-MoE struggles to handle diverse and complex inputs. To address these limitations, we propose GuiLoMo, a fine-grained strategy for jointly allocating layer-wise expert numbers and ranks in LoRA-MoE based on bilevel optimization with GuidedSelection Vectors.
GuiLoMo operates in two steps: 1) Obtaining GuidedSelection Vectors (GSVs): through an initial optimization, GSVs are learned to guide LoRA-MoE in selecting the optimal expert numbers and ranks tailored to both the model backbone and the downstream task; 2) Allocating Expert Numbers and Ranks: after this optimization, the optimized GSVs are used to allocate expert numbers and ranks for LoRA-MoE, followed by the final training phase. To summarize, our contributions are as follows: 1) To further unlock the potential of LoRA-MoE, we propose GuiLoMo, a fine-grained layer-wise expert number and rank allocation strategy based on the proposed GuidedSelection Vectors. 2) We conduct extensive experiments on a wide range of tasks, including natural language understanding, question answering, and mathematical reasoning, demonstrating the effectiveness of GuiLoMo. For instance, GuiLoMo achieves an average $2.61\%$ improvement on mathematical reasoning tasks with LLaMA-$2_{7\mathrm{B}}$. Further analysis confirms the effectiveness of GuidedSelection Vectors in selecting optimal expert numbers and ranks. 3) We provide valuable insights into the relationship between expert numbers, ranks, and their assigned layers. For example, we observe that multi-head attention (MHA) benefits more from increased expert numbers and ranks in bottom and top layers, whereas feed-forward networks (FFN) only exhibit this behavior in middle and top layers.
# 2 Preliminary
LoRA-MoE Framework LoRA-MoE integrates multiple vanilla LoRA experts into each pre-trained LLM submodule. Vanilla LoRA (Hu et al., 2022) efficiently adapts large models to downstream tasks by lowering computational and memory costs.
For a pre-trained weight matrix $\mathbf{W_0} \in \mathbb{R}^{m \times n}$, LoRA creates two low-rank trainable matrices $\mathbf{A}$ and $\mathbf{B}$, where $\mathbf{B} \in \mathbb{R}^{m \times r}$, $\mathbf{A} \in \mathbb{R}^{r \times n}$, and $r \ll \min(m, n)$. During training, $\mathbf{W_0}$ remains fixed while $\mathbf{A}$ and $\mathbf{B}$ are updated via gradient descent. The output representation $\mathbf{h}$ is defined as follows:
$$ \mathbf{h} = \mathbf{W_0} x + \mathbf{B}\mathbf{A} x $$
Every traditional LoRA-MoE layer incorporates $N$ LoRA experts. The forward pass through the layer can be formulated as:
$$ \mathbf{h} = \mathbf{W}_0 x + \sum_{i=1}^{N} \mathbf{G}(x)_i \mathbf{B_i}\mathbf{A_i} x $$
where $\mathbf{G}(x) = \mathrm{Softmax}(x\mathbf{W_r})$ represents the router in the LoRA-MoE layer. $\mathbf{W}_r$ is the trainable parameter matrix of the routing network that directs input $x$ to different experts. By adaptively allocating inputs, the router promotes expert specialization, enhancing their ability to handle diverse tasks and input patterns. Applying LoRA-MoE for LLMs LoRA-MoE is applied to key modules of LLMs, namely multi-head attention (MHA) and feed-forward networks (FFNs). In MHA, inputs are projected via $\mathbf{W}^Q$, $\mathbf{W}^K$, $\mathbf{W}^V$, and $\mathbf{W}^O \in \mathbb{R}^{d \times d}$. Each FFN uses gate- and up-projection matrices $\mathbf{W}^G$, $\mathbf{W}^U \in \mathbb{R}^{d \times d'}$, an activation (e.g., GELU), and a down-projection $\mathbf{W}^D \in \mathbb{R}^{d' \times d}$, where $d' > d$. GuiLoMo assigns an optimal expert number and rank to these matrices.
Figure 1: An illustration of our GuiLoMo strategy.
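The forward pass in Eq. 2 can be sketched with numpy. This minimal version uses dense softmax routing over all $N$ experts (no top-$K$ truncation), with illustrative shapes:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def lora_moe_forward(x, W0, experts, Wr):
    """Forward pass of a LoRA-MoE layer (cf. Eq. 2), as a minimal sketch.

    x: (n,) input; W0: (m, n) frozen weight; experts: list of (B, A) pairs
    with B: (m, r_i), A: (r_i, n); Wr: (n, N) router weights.
    """
    gate = softmax(x @ Wr)          # G(x): soft routing weights over N experts
    h = W0 @ x                      # frozen base projection
    for g_i, (B, A) in zip(gate, experts):
        h = h + g_i * (B @ (A @ x))  # low-rank expert update, weighted by gate
    return h
```

Because each expert may carry its own rank $r_i$, this shape convention is exactly what GuiLoMo's per-expert rank allocation later exploits.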
GuiLoMo involves two steps: (Step 1) exploring the optimal number of experts and ranks via a bilevel optimization algorithm with GuidedSelection Vectors; (Step 2) allocating the optimal expert number and rank based on the GuidedSelection Vectors obtained in the previous step.
# 3 Method
In this section, we present our GuiLoMo strategy, which consists of two main steps: 1) a bilevel optimization algorithm is employed to obtain GuidedSelection Vectors (GSVs) of expert and rank for each module, tailored to the specific downstream task and model (§3.1); 2) based on the obtained GSVs, the optimal expert number and rank for each module in LoRA-MoE are allocated, and the final training is then conducted (§3.4). See §3.2 and §3.3 for details of GSVs.
# 3.1 Bilevel Optimization for Obtaining the GuidedSelection Vector
In this section, we introduce the objective of the bilevel optimization used to obtain GuidedSelection Vectors and its optimization process. Optimization Objective Formally, our objective is to automatically determine the optimal expert number $e_i^*$ for a given module (e.g., down-projection in FFN) within the $i$-th layer, and the optimal rank $r_{i,j}^*$ for the $j$-th expert under a specified LLM and downstream task setting. To achieve this, we formulate the problem as an optimization task.
In this process, we introduce the Expert GuidedSelection Vector $\mathbf{g}_E$ and Rank GuidedSelection Vector $\mathbf{g}_R$ as key components of the optimization, and the optimization objective is:
$$ \operatorname*{min}_{\{\mathbf{g}_E, \mathbf{g}_R\}} \mathcal{L}(\mathcal{D}, \pi_\theta, \mathbf{g}_E, \mathbf{g}_R) $$
$$ \mathcal{L} = \mathcal{L}_{\mathrm{SFT}} + \mathcal{L}_{\mathrm{BAL}} $$
where $\pi_\theta$ is the specific LLM and $\mathcal{L}_{\mathrm{SFT}}$ denotes the supervised fine-tuning loss, computed via autoregressive language modeling on the downstream dataset $\mathcal{D}$, while $\mathcal{L}_{\mathrm{BAL}}$ (refer to Eq. 16) represents the MoE balancing loss (Fedus et al., 2022; Zoph et al., 2022), introduced to encourage balanced utilization across experts and prevent expert collapse. The GuidedSelection Vectors $\mathbf{g}_E \in \mathbb{R}^{e_{\max}}$ and $\mathbf{g}_R \in \mathbb{R}^{r_{\max}}$ are both trainable, with $e_{\max}$ and $r_{\max}$ representing the predefined maximum number of experts and ranks (see §3.2 and §3.3 for more details of $\mathbf{g}_E$ and $\mathbf{g}_R$). Since the optimization of $\mathbf{g}_E, \mathbf{g}_R$ should be carried out under the optimal $\pi_\theta^*$, we draw inspiration from Liu et al. (2019) and formulate the problem as a bilevel optimization:
$$ \begin{array}{rl} & \operatorname*{min}_{\{\mathbf{g}_E, \mathbf{g}_R\}} \mathcal{L}(\mathcal{D}_1, \pi_\theta^*, \mathbf{g}_E, \mathbf{g}_R) \\ & \mathrm{s.t.}\ \pi_\theta^* = \arg\operatorname*{min}_{\pi_\theta} \mathcal{L}(\mathcal{D}_2, \pi_\theta, \mathbf{g}_E, \mathbf{g}_R) \end{array} $$
where $\mathcal{D}_1$ and $\mathcal{D}_2$ are two splits of the training set $\mathcal{D}$ of equal size. Optimization Process Based on the above objective, we formulate the overall procedure for obtaining the optimized GSVs in only a few ($T$) training steps. At the $t$-th training step, we first obtain $\pi_\theta^*(t)$ following Eq. 6, and then optimize the GSVs $\mathbf{g}_{E/R}^{(t)} = \{\mathbf{g}_E, \mathbf{g}_R\}^{(t)}$ with $\pi_\theta^*(t)$ to obtain $\mathbf{g}_{E/R}^{(t+1)} = \{\mathbf{g}_E, \mathbf{g}_R\}^{(t+1)}$ following Eq. 7. Finally, we use $\mathbf{g}_{E/R}^{(t+1)}$ to obtain $\pi_\theta(t+1)$ for the next step following Eq. 8.
$$ \begin{array}{rl} \pi_\theta^*(t) = \pi_\theta(t) - \xi_\theta \nabla_{\pi_\theta(t)} \mathcal{L}(\mathcal{D}_2, \pi_\theta(t), \mathbf{g}_E^{(t)}, \mathbf{g}_R^{(t)}) & (6) \\ \mathbf{g}_{E/R}^{(t+1)} = \mathbf{g}_{E/R}^{(t)} - \xi_{\mathbf{g}} \hat{\nabla}_{\mathbf{g}_{E/R}^{(t)}} \mathcal{L}(\mathcal{D}_1, \pi_\theta^*(t), \mathbf{g}_E^{(t)}, \mathbf{g}_R^{(t)}) & (7) \\ \pi_\theta(t+1) = \pi_\theta(t) - \xi_\theta \nabla_{\pi_\theta(t)} \mathcal{L}(\mathcal{D}_2, \pi_\theta(t), \mathbf{g}_E^{(t+1)}, \mathbf{g}_R^{(t+1)}) & (8) \end{array} $$
where $\xi_\theta$ and $\xi_{\mathbf{g}}$ are the learning rates for updating the LLM weights and the GSVs, respectively.
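The alternating updates in Eqs. 6-8 amount to a simple loop. In this sketch, `grad_theta` and `grad_gsv` are hypothetical callables returning the loss gradients on $\mathcal{D}_2$ and $\mathcal{D}_1$ respectively (the latter via the straight-through estimator), and parameters are abstract values supporting `-` and `*`:

```python
def bilevel_optimize(theta, gsv, grad_theta, grad_gsv, lr_theta, lr_g, T):
    """Alternating bilevel updates of model weights and GSVs (Eqs. 6-8).

    theta: LLM weights pi_theta; gsv: the GuidedSelection Vectors
    {g_E, g_R}; lr_theta, lr_g: the step sizes xi_theta and xi_g.
    """
    for _ in range(T):
        # Eq. 6: one inner step on D2 approximates the optimal weights
        theta_star = theta - lr_theta * grad_theta(theta, gsv)
        # Eq. 7: update the GSVs against the approximated optimum on D1
        gsv = gsv - lr_g * grad_gsv(theta_star, gsv)
        # Eq. 8: update the model weights with the new GSVs on D2
        theta = theta - lr_theta * grad_theta(theta, gsv)
    return theta, gsv
```

The single inner gradient step in Eq. 6 is the usual one-step approximation of the lower-level argmin, which keeps the procedure as cheap as ordinary training.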
$\hat{\nabla}$ indicates that we apply the STGE technique to ensure proper gradient flow (see §3.2 and §3.3 for more details). The overall optimization procedure is summarized in Alg. 1. The final obtained $\mathbf{g}_E^*$ and $\mathbf{g}_R^*$ determine the optimal number of experts and ranks according to the strategy described in §3.4. GuiLoMo progressively learns the optimal heterogeneous LoRA-MoE configuration, allowing it to meet model- and task-specific needs.
# Algorithm 1: Optimization Process
7: Derive the optimized Expert GSV $\mathbf{g}_E^*$ and Rank GSV $\mathbf{g}_R^*$.
# 3.2 Expert GuidedSelection Vector
For the Expert GSVs $\mathbf{g}_E \in \mathbb{R}^{e_{\max}}$, we first predefine the maximum expert number $e_{\max}$ and initialize them with a Gaussian distribution:
$$ \mathbf{g}_E = \mathrm{Softmax}(\alpha), \quad \mathrm{with}\ \alpha = \{\alpha_i\}_{i=1}^{e_{\max}} $$
where $\alpha_i \sim \mathcal{N}(0, 1)$, and $\mathbf{g}_E$ denotes the selection probabilities for different allocated expert number settings. GuiLoMo selects the expert number setting by taking the index of the maximum value in $\mathbf{g}_E$. For example, if the maximum value of $\mathbf{g}_E^i$ at the $i$-th layer occurs at the 3rd position during the current training step, we allocate 3 experts for this module (see the green region in Fig. 1). Since $\mathbf{g}_E^i$ is learned through a few optimization steps on the task-specific data, the expert selection process described above needs to be differentiable.
To guarantee gradient flow and enable end-to-end optimization, we adopt the Straight-Through Gradient Estimator (STGE) (Bengio et al., 2013) along with an auxiliary virtual vector $\mathcal{M}_E$ to approximate discrete selection while maintaining differentiability. Let $n^\star$ denote the index of the maximum value in $\mathbf{g}_E$. The forward propagation of the expert virtual vector $\mathcal{M}_E \in \{0, -\infty\}^{e_{\max}}$ is formulated as follows:
$$ \mathcal{M}_E^i = \left\{ \begin{array}{ll} 0, & \mathrm{if}\ i \leq n^\star \\ -\infty, & \mathrm{if}\ i > n^\star \end{array} \right. $$
For example, when allocating 3 experts, the expert virtual vector $\mathcal{M}_E$ is $[0, 0, 0, -\infty, \ldots, -\infty]$. Meanwhile, in the backward propagation, we propagate the gradient flow from $\mathcal{M}_E$ to $\mathbf{g}_E$:
$$ \frac{\partial \mathcal{L}}{\partial \mathbf{g}_E} = \mathcal{H}\left(\frac{\partial \mathcal{L}}{\partial \mathcal{M}_E}\right) $$
For more details on the $\mathcal{H}$ operation, please refer to Appendix G. The $\mathcal{M}_E$ is applied to the top-$K$ routing process to guide the learning of $\mathbf{g}_E$:
$$ \hat{G}(x)_i = \frac{\mathrm{TopK}(\mathrm{Softmax}(x\mathbf{W}_r + \mathcal{M}_E), K)_i}{\sum_{i=1}^{K} \mathrm{TopK}(\mathrm{Softmax}(x\mathbf{W}_r + \mathcal{M}_E), K)_i} $$
where $\mathbf{W}_r$ denotes the weight of the routing network.
# 3.3 Rank GuidedSelection Vector
The Rank GSVs $\mathbf{g}_R \in \mathbb{R}^{r_{\max}}$ share a similar concept with the Expert GSVs during bilevel optimization.
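The expert virtual vector and masked routing of §3.2 (Eqs. 10 and 12) can be sketched as follows; the top-$K$ truncation is omitted for brevity, and all names are illustrative:

```python
import numpy as np

def expert_mask(g_E):
    """Build the expert virtual vector M_E (cf. Eq. 10) from the Expert GSV.

    With zero-based indexing, argmax index n* means 'allocate n*+1 experts';
    entries past it are set to -inf so that Softmax zeroes them out.
    """
    n_star = int(np.argmax(g_E))
    m = np.zeros_like(g_E)
    m[n_star + 1:] = -np.inf
    return m

def masked_router(x, Wr, g_E):
    """Softmax routing restricted to the selected experts (cf. Eq. 12)."""
    logits = x @ Wr + expert_mask(g_E)
    # stable softmax; exp(-inf) evaluates to exactly 0 for masked experts
    e = np.exp(logits - np.max(logits[np.isfinite(logits)]))
    return e / e.sum()
```

Because the $-\infty$ entries only enter additively before the Softmax, the discrete "how many experts" choice is expressed entirely through a differentiable routing distribution, which is what lets the STGE pass gradients back to $\mathbf{g}_E$.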
It begins by predefining the maximum rank $r_{\max}$ and is also initialized with a Gaussian distribution using Eq. 9. However, the semantic meaning of each element differs: each element in $\mathbf{g}_R$ represents a specific rank assigned to the corresponding expert. We select the index of the maximum value in $\mathbf{g}_R$, i.e., $m^\star$, to determine the rank for the current training step. Similar to $\mathbf{g}_E$, $\mathbf{g}_R$ is non-differentiable during this process; therefore, we design a rank virtual vector $\mathcal{M}_R \in \{0, 1\}^{r_{\max}}$ to address this issue:
$$ \mathcal{M}_R^i = \left\{ \begin{array}{ll} 1, & \mathrm{if}\ i \leq m^\star \\ 0, & \mathrm{if}\ i > m^\star \end{array} \right. $$
Table 2: Accuracy comparison of different methods under direct fine-tuning for each dataset. MoLA(5) indicates assigning a uniform 5 experts to each layer. Uniform(8) denotes setting the rank of all LoRA experts to 8.
For example, if the maximum value of $\mathbf{g}_R$ at a given training step is located at the 4th element, the rank for this module is set to 4 (see the yellow region in Fig. 1). Accordingly, the corresponding rank virtual vector $\mathcal{M}_R$ is $[1, 1, 1, 1, 0, \ldots, 0]$. Then, we parameterize each LoRA expert matrix, denoted as $\Delta = \mathbf{B}\mathbf{A} \in \mathbb{R}^{m \times n}$ (Eq. 1), in a form that mimics singular value decomposition (SVD) to obtain $\Delta = \mathbf{P}\pmb{\Lambda}\mathbf{Q}$. $\mathbf{P} \in \mathbb{R}^{d_1 \times r_{\max}}$ and $\mathbf{Q} \in \mathbb{R}^{r_{\max} \times d_2}$ correspond to the original LoRA matrices $\mathbf{B}$ and $\mathbf{A}$, respectively, and the entries of $\pmb{\Lambda}$ are initialized to 1.
Note that we do not perform exact SVD. Subsequently, the rank virtual vector $\mathcal{M}_R$ is integrated with $\pmb{\Lambda}$ and incorporated into Eq. 2 to perform forward propagation:
$$ \mathbf{h}' = \mathbf{W}_0 x + \sum_{i=1}^{K} \hat{\mathbf{G}}(x)_i \mathbf{P}(\mathcal{M}_R \odot \pmb{\Lambda} \odot \mathbf{Q}x) $$
where $\odot$ denotes the element-wise product, and $\hat{\mathbf{G}}$ is defined in Eq. 12. $\mathcal{M}_R$ guides the learning of $\mathbf{g}_R$, and its gradients are backpropagated in the same manner as those of $\mathcal{M}_E$ in Eq. 11, using the STGE technique.
# 3.4 Allocating Expert Number and Rank via GSV
After obtaining the optimized expert and rank GSVs, i.e., $\mathbf{g}_E^*$ and $\mathbf{g}_R^*$, the optimal expert number $e^*$ and rank $r^*$ are determined by selecting the index corresponding to the maximum values:
$$ \begin{array}{r} e_i^* = \operatorname{argmax}(\mathbf{g}_E^{*i}) \\ r_{i,j}^* = \operatorname{argmax}(\mathbf{g}_R^{*i,j}) \end{array} $$
where $e_i^* \leq e_{\max}$ and $r_{i,j}^* \leq r_{\max}$ denote the expert number assigned to the $i$-th layer and the rank of the $j$-th expert in the $i$-th layer, respectively. Subsequently, we fine-tune the model using the loss function defined in Eq. 4 with expert number $e^*$ and rank $r^*$, where the LoRA-MoE weights are initialized with $\pi_\theta^*(T)$.
# 4 Experiment
In this section, we conduct extensive experiments to examine the performance of GuiLoMo.
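A minimal sketch of the rank gating (Eqs. 13-14, one expert term) and the final allocation rule (Eqs. 15-16); zero-based argmax indices are mapped to counts by adding 1, matching the paper's 1-indexed examples:

```python
import numpy as np

def rank_masked_expert(x, P, Lam, Q, g_R):
    """One SVD-parameterized LoRA expert with rank gating (cf. Eqs. 13-14).

    The argmax m* of the Rank GSV g_R keeps the first m*+1 entries of the
    singular-value vector Lam (all ones at init) active before P projects
    the masked low-rank update back up.
    """
    m_star = int(np.argmax(g_R))
    M_R = (np.arange(len(Lam)) <= m_star).astype(float)  # [1,...,1,0,...,0]
    return P @ (M_R * Lam * (Q @ x))                     # masked update term

def allocate(g_E_star, g_R_star):
    """Final allocation (cf. Eqs. 15-16): argmax over the optimized GSVs.

    Returns the expert count for one module and one rank per expert.
    """
    e_star = int(np.argmax(g_E_star)) + 1
    r_star = [int(np.argmax(g)) + 1 for g in g_R_star]
    return e_star, r_star
```

Gating $\pmb{\Lambda}$ rather than truncating $\mathbf{P}$ and $\mathbf{Q}$ keeps all matrix shapes fixed at $r_{\max}$ during the search, so different rank choices can be explored without re-instantiating the experts.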
We also conduct additional experimental analyses to gain deeper insights into this field, as presented in §4.3. Implementation details can be found in Appendix D.
# 4.1 Experimental Settings
Datasets Following Qing et al. (2024), we evaluate our model on three natural-language understanding (NLU) tasks from GLUE: (1) the Microsoft Research Paraphrase Corpus (MRPC) (Dolan and Brockett, 2005), (2) the Recognizing Textual Entailment (RTE) dataset (Wang et al., 2019), and (3) the Corpus of Linguistic Acceptability (CoLA) (Wang et al., 2019), as well as three reasoning-focused question-answering (QA) tasks: (1) ScienceQA (Lu et al., 2022), (2) CommonsenseQA (Talmor et al., 2019), and (3) OpenBookQA (Mihaylov et al., 2018). We also evaluate GuiLoMo on mathematical reasoning benchmarks. Specifically, we perform instruction tuning on the MetaMathQA (Yu et al., 2024) dataset and evaluate on three benchmarks: (1) MultiArith (Roy et al., 2015), (2) SVAMP (Patel et al., 2021), and (3) GSM8K (Cobbe et al., 2021).
Table 3: The results of mathematical reasoning under three models. M(5)-U(8) denotes MoLA(5)-Uniform(8); A-U(8) denotes AlphaLoRA-Uniform(8); M(5)+S denotes MoLA(5)+SoRA; A+S denotes AlphaLoRA+SoRA. MoLA(5) indicates assigning a uniform 5 experts to each layer. Uniform(8) represents setting the rank of all LoRA experts to 8.
See Appendix B for the detailed statistics of all the datasets used in our experiments. Models We have applied our method to LLaMA$_{7\mathrm{B}}$ (Touvron et al., 2023a), LLaMA-$2_{7\mathrm{B}}$ (Touvron et al., 2023b), LLaMA-$3_{8\mathrm{B}}$, and Mistral-v0.1$_{7\mathrm{B}}$ (Jiang et al., 2023). Baselines We compare our GuiLoMo strategy with current state-of-the-art (SOTA) methods, including MoLA (Gao et al., 2024), AlphaLoRA (Qing et al., 2024), MoLA+SoRA, and AlphaLoRA+SoRA.
SoRA (Ding et al., 2023) is a variant of LoRA that allows dynamic adjustment of the intrinsic rank during the adaptation process.$^2$ Implementation details of the baselines can be found in Appendix E. # 4.2 Main Result Table 2 reports the results on the three NLU tasks and three QA benchmarks. Across these datasets, GuiLoMo surpasses every baseline in terms of average (Avg.) performance. Specifically, relative to AlphaLoRA-Uniform(8)$^3$, GuiLoMo delivers consistent gains of $0.61\%$, $0.64\%$, and $0.84\%$ under the three model settings, respectively. GuiLoMo also outperforms the baselines on the mathematical reasoning task. As shown in Table 3, GuiLoMo exceeds AlphaLoRA+SoRA by an average of $2.48\%$, $2.61\%$, and $0.43\%$ on LLaMA-7B, LLaMA-2-7B, and LLaMA-3-8B, respectively. Based on these observations, we conclude that: GuiLoMo, which flexibly allocates expert numbers and ranks tailored to both model- and task-specific demands, further unleashes the potential of LoRA-MoE and leads to improved performance. Table 4: Average results of ablation studies on GuiLoMo across six tasks. MoLA(5) assigns a uniform 5 experts to each layer; Uniform(8) sets the rank of every LoRA expert to 8. "w/o" means the exclusion of this strategy from GuiLoMo. See Table 8 for detailed results. # 4.3 Further Analysis Ablation Study of the GuiLoMo Strategy. We conduct ablation studies to assess the effectiveness of GuiLoMo with LLaMA-2-7B across the NLU and QA benchmarks under two settings: (1) a fixed, uniformly distributed number of experts with varying ranks, and (2) a fixed, uniformly assigned rank with varying expert allocation. 
As shown in Table 4, compared with the uniformly allocated baseline MoLA(5)-Uniform(8), applying GuiLoMo exclusively to expert allocation or exclusively to rank allocation yields average performance improvements of $1.95\%$ and $1.53\%$, respectively. The results also show that excluding either expert allocation or rank allocation from GuiLoMo leads to performance drops of $1.50\%$ and $1.10\%$, respectively. Accordingly, we highlight the following insight: Insight 1. Jointly optimizing both expert and rank allocations outperforms optimizing either one in isolation. Results across Model Families and Scales. We conduct additional experiments on a model from another family, Mistral-v0.1-7B, and on a larger-scale model, LLaMA-2-13B, across three benchmarks to examine the generalization of GuiLoMo. As shown in Table 5, GuiLoMo achieves average score improvements of $0.79\%$ and $0.18\%$ over AlphaLoRA+SoRA on LLaMA-2-13B and Mistral-v0.1-7B, respectively. Table 5: The scores on MRPC, CoLA, and ComQA under the Mistral-v0.1-7B and LLaMA-2-13B models. Avg.: the average score over these three benchmarks. Figure 2: A comparative study of perturbed expert number $e^*$ and rank $r^*$ at different layers (8th, 16th, and 24th). IEN(\*) and DEN(\*) denote the addition and removal of \* experts, respectively. MRA_half(\*): half of the LoRA experts have their ranks increased by \*, while the other half have their ranks decreased by \* accordingly. MRA_random: randomly shuffling the ranks of the LoRA experts. The results further validate the effectiveness of GuiLoMo across models of different scales and families. 
Effectiveness of the Expert Number and Rank Assigned by GuiLoMo. To validate the effectiveness of the expert number $e^*$ and rank $r^*$ assigned by GuiLoMo, which are tailored to specific models and tasks, we additionally conduct experiments with the following three strategies using LLaMA-2-7B on the CoLA benchmark: 1) Increase in Expert Number (IEN): increasing the number of experts while keeping the total rank $( \sum _ { i = 1 } ^ { N } \sum _ { j = 1 } ^ { e _ { i } ^ { * } } r _ { i , j } ^ { * } )$ constant; 2) Decrease in Expert Number (DEN): decreasing the number of experts while keeping the total rank constant$^4$; 3) Mixed Rank Adjustment (MRA): keeping the number of experts fixed, we randomly reassign ranks while keeping the total rank unchanged. Note that only the expert number and rank of the specific $m$-th layer are intervened on using the above three strategies, while those of the remaining layers remain unchanged (as allocated by GuiLoMo). We apply these strategies to three layers (8, 16, 24) and report the results in Fig. 2. The results show that GuiLoMo outperforms all modified configurations, achieving the highest overall performance. From the results, we distill the following insight: Figure 3: Total rank of sub-modules (MHA and FFN) across different layer ranges in LLaMA-3-8B on CommonsenseQA. Figure 4: Total number of allocated experts for sub-modules (MHA and FFN) across different layer ranges in LLaMA-3-8B on CommonsenseQA. Insight 2. GuiLoMo allocates layer-wise optimal expert numbers and ranks, better exploiting the potential of LoRA-MoE. Allocation for MHA and FFN. To delve deeper, we also examine the allocation patterns for MHA and FFN separately. We report the total assigned rank and the average number of allocated experts for MHA and FFN under different layer ranges in Fig. 3 and Fig. 4, respectively. 
For example, the total rank (Total Rank of Sub-modules $= \sum _ { i = 1 } ^ { 8 } \sum _ { j = 1 } ^ { e _ { i } ^ { * } } r _ { i , j } ^ { * }$) in layer range $1 \sim 8$ of the FFN includes the gate-, up-, and down-projections. Based on Fig. 3 and Fig. 4, we draw the following conclusion (see similar observations on other models and tasks in Appendix H): Figure 5: Distribution of ED scores computed over all modules of LLaMA-7B, LLaMA-2-7B, and LLaMA-3-8B under the three NLU tasks. Insight 3. MHA requires more experts and higher ranks in the bottom and top layers, whereas FFN shows this trend mainly in the middle and top layers. Expert Diversity. We also explore Expert Diversity (ED), quantified as the ratio between the size of the largest subset of experts whose ranks are all mutually distinct and the total number of experts (ED $=$ largest rank-distinct subset / all experts). For example, consider the FFN's up-projection module containing five experts with ranks [3, 5, 6, 3, 7]; the expert diversity score is $\mathrm{ED} = 4/5 = 0.8$. In Fig. 5, we analyze the ED score for each sub-module across the NLU benchmarks on LLaMA-7B, LLaMA-2-7B, and LLaMA-3-8B. The results show that $38.1\%$ of the ED scores fall within the high range of $0.75 \sim 1.00$, whereas only $8.7\%$ are in the low range of $0.00 \sim 0.25$. Based on this observation, we draw the following conclusion: Insight 4. Allocating diverse expert ranks enables more flexible and specialized adaptation to different tasks. Impact of Task Difficulty. We aim to investigate how the expert number $e^*$ and rank $r^*$ derived by GuiLoMo differ between challenging tasks and simpler ones. 
In pursuit of this goal, we use two BBH (Suzgun et al., 2023) sub-tasks, Tracking Shuffled Objects and Logical Deduction$^5$, each of which consists of sub-tasks differing in the number of objects $K$ involved, with difficulty increasing as the number of objects grows. Table 6: The average number of assigned experts across all modules ("module" here refers to all weight matrices where LoRA-MoE is applied, i.e., $\mathbf { W } ^ { Q }$, $\mathbf { W } ^ { K }$, $\mathbf { W } ^ { V }$, $\mathbf { W } ^ { O }$ in MHA and $\mathbf { W } ^ { U }$, $\mathbf { W } ^ { D }$, $\mathbf { W } ^ { G }$ in FFN) and the average rank across all experts, calculated under different object numbers within the same task. From Table 6, we observe that as the number of objects increases, the number of experts assigned to the different sub-tasks scales proportionally with the number of objects, whereas the rank does not exhibit such a trend. Hence, we derive the following insight: Insight 5. Within LoRA-MoE, harder tasks benefit more from adding experts than from raising the rank of each LoRA expert. # 5 Related work LoRA-MoE Framework. Recent research explores the integration of MoE (Shazeer et al., 2017) and LoRA (Hu et al., 2022), referred to as LoRA-MoE, to boost the performance of LLMs in both single-task and multi-task scenarios in an efficient manner (Wu et al., 2024; Gao et al., 2024; Qing et al., 2024; Dou et al., 2024; Liu et al., 2023; Luo et al., 2024). For instance, Dou et al. (2024) leveraged LoRA-MoE to reduce the catastrophic forgetting problem during supervised fine-tuning. Wu et al. (2024) utilized LoRA-MoE with a hierarchical gating mechanism for efficient fusion across NLP and Vision & Language tasks. However, existing works merely allocate expert numbers and ranks uniformly for LoRA-MoE, failing to fully exploit its potential. Allocation Strategy for LoRA-MoE. To exploit the potential of LoRA-MoE, Gao et al. 
(2024) revealed that higher layers require more LoRA experts and initialized LoRA-MoE with different numbers of experts using group-wise allocations. Moreover, Qing et al. (2024) leveraged Heavy-Tailed Self-Regularization (HT-SR) theory to develop a training-free and theoretically grounded method for allocating suitable expert numbers in LoRA-MoE. However, previous methods only consider the expert number while overlooking the expert rank, which results in all experts having the same capacity and thus lacking diversity. In contrast, our proposed GuiLoMo jointly optimizes both the expert number and the rank.
Parameter-efficient fine-tuning (PEFT) methods, particularly Low-Rank Adaptation (LoRA), offer an efficient way to adapt large language models at reduced computational cost. However, their performance is limited by the small number of trainable parameters. Recent work combines LoRA with Mixture-of-Experts (MoE), i.e., LoRA-MoE, to enhance capacity, but two limitations remain that hinder the full exploitation of its potential: 1) the influence of downstream tasks is overlooked when assigning expert numbers, and 2) a uniform rank is assigned across all LoRA experts, which restricts representational diversity. To mitigate these gaps, we propose GuiLoMo, a fine-grained, layer-wise expert-number and rank allocation strategy with Guided Selection Vectors (GSVs). GSVs are learned via a prior bilevel optimization process to capture both model- and task-specific needs, and are then used to allocate optimal expert numbers and ranks. Experiments on three backbone models across diverse benchmarks show that GuiLoMo consistently achieves superior or comparable performance to all baselines. Further analysis offers key insights into how expert numbers and ranks vary across layers and tasks, highlighting the benefits of adaptive expert configuration. Our code is available at https://github.com/Liar406/Gui-LoMo.git.
# 1. Introduction The current literature on temporal data analysis is divided into two main branches: one mainly considers numerical fluctuations in different dimensions as Multivariate Time Series (MTS) [16,26,19,32], and the other focuses on a characterization as a linear sequence of discrete events, be they durative [3,28,5] or pointwise [15,9]. The first class denotes the classification task as Multivariate Time Series Classification (MTSC), which requires capturing behavioural correlations between different dimensions [29]. Cross-dimension correlations can be represented via convolutions [32] or kernels that potentially squash such correlations into a single numerical value [16]. Despite the possibility of doing so, the explanations thus obtained are not easily interpretable by humans. Furthermore, recent results have shown that these solutions cannot capture such correlations when considering complex and multifactorial problems, such as clinical ones, and yield results with low accuracy [14]. These considerations require us to approach the problem from a different angle. Recent approaches are starting to bridge the numerical and the event-driven characterizations of temporal data by showing the possibility of discretizing the Data Trends (DT) occurring in the data into durative constituents [20]. Notwithstanding this, these events cannot summarize the numerical information precisely characterizing the trend of choice, thus only characterizing growth, variability, or decrease patterns without providing any further numerical references to better describe the event of interest. This demands a more general representation of DT patterns, which should also encompass numerical features, albeit summarized (e.g., Catch22 [25]), to characterize the underlying data from a more data-driven perspective. 
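As a rough illustration of the DT-discretization idea, a univariate series can be split into durative increase / non-increase constituents by merging consecutive steps with the same direction. This is a simplified sketch under our own assumptions (function name and labels are hypothetical), not the exact procedure of [20]:

```python
def discretize_trends(x):
    """Split a univariate series into durative constituents.

    Each step t -> t+1 is labelled 'inc' when x[t+1] > x[t], 'ninc'
    otherwise; maximal runs of equally-labelled steps become durative
    constituents, returned as (label, start, end) step-index triples.
    """
    steps = ['inc' if b > a else 'ninc' for a, b in zip(x, x[1:])]
    spans, start = [], 0
    for t in range(1, len(steps)):
        if steps[t] != steps[start]:      # direction changes: close the run
            spans.append((steps[start], start, t - 1))
            start = t
    if steps:
        spans.append((steps[start], start, len(steps) - 1))
    return spans
```

Richer labels such as IncreaseRapidly or HighVolatility can then be matched over such runs of constituents.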
Within the second branch, Business Process Mining [1,15] enables, through declarative process mining, the temporal correlation of single pointwise events through a human-readable data description while bridging across the different occurring activity labels (which distinguish different action/event types). This literature premiered the extraction of declarative specifications from event-based data while enriching the information concerning temporal correlations with additional numerical [23] and categorical [12] data predicates. Despite the inherent ability of such techniques to provide a human-explainable characterization of the data, they cannot be straightforwardly applied to solve the MTSC problem, even after DT-discretization. First, despite the possibility of representing temporal specifications as concurrent ones [1], these are always interpreted as a temporally ordered succession of non-concurrent and non-overlapping events [13], thus invalidating the possibility of directly exploiting the same algorithms for capturing multiple events co-occurring at the same time across dimensions. In fact, although the current XES [2] standard can express durative events, it cannot clearly represent and group together co-occurring events within the same trace across dimensions. Second, as such techniques often involve small and curated datasets, they have been proven several times not to scale over real-world ones [9]. At the time of writing, the most efficient temporal specification miner, Bolt2 [9], extracts data-less temporal clauses and therefore does not consider the data payload information associated with each activity label, as required after extending DT with data payloads. 
As this was the miner used in our previous work to characterize correlations across temporal dimensions, after a minor extension supporting durative and co-occurring events [14], the extraction of a declarative dataful specification requires expanding the latter algorithm further, so as to also enable the extraction of dataful features from the declarative clauses. This paper aims to bridge all the approaches above to achieve a generally explainable sequential learning explanation through EMeriTAte+DF$^4$. The proposed technique works by first discretizing MTS into machine-readable polyadic logs through dataful DT mining (Sect. 4.1), and then extracting declarative descriptions of the previously mined representation while retaining dataful features (Sect. 4.2). We then use such clauses to derive the features satisfied, violated, or vacuously satisfied by the traces, and then derive a propositional representation of the classes associated with arbitrary MTS segments through white-box classifier explanation extraction (also Sect. 4.2). By doing so, each temporal class is then defined through the satisfaction of a propositional formula, where predicates state temporal properties of the data. This approach further generalises pre-existing deviance mining solutions, which only encompass the conjunctive-based characterization of such temporal classes [12]. This constitutes a major improvement over our previous solution, Explainable MultivariatE coRrelatIonal Temporal Artificial inTElligence (EMeriTAte) [14], where (i) the original implementation of DT mining did not extract a numerical summarised description of the data, (ii) it used a straightforward brute-force algorithm considering all the possible sliding windows of all possible sizes, (iii) the extraction of the declarative clauses for each polyadic log (Sect. 
3.1) did not consider dataful clauses, and (iv) the generation of the embedding from the mined clauses was regarded as a phase distinct from the one in which the trace embeddings were generated, as in previous literature [12], thus adding significant computational overhead. The application of the aforementioned changes to our previous solution enables us to design its dataful variant (EMeriTAte+DF), outperforming EMeriTAte in terms of efficiency, accuracy, precision, and recall. Last, we extend our experimental set-up to consider other MTS datasets, thus including univariate ones [22,18] and others related to general human mobility, thus remarking the inherent difficulty of characterizing dyskinetic/off events in Parkinson's Disease patients. # 2. Related Works # 2.1. Hybridly Explainable Artificial Intelligence The observations from which the utility of defining a hybrid explainable AI is deduced have their origins in the formalization of verifiable AI systems. At the basis of this framework [30], we always consider a model of the system to be verified $( \mathfrak { S } )$, a model of the environment $( \mathfrak { E } )$, and the formal property to be verified $( \varphi )$, where all of these components might be expressed in logical form, so as to provide correctness guarantees. Specification mining algorithms [9] extract $\varphi$ from a given system or environment of interest $\mathfrak { S } \vee \mathfrak { E }$; deviance mining algorithms [12] are a specific case of the former, assuming (i) that environment states can be labelled and (ii) that we can extract one single specification per environment label, which provides the characterizing behaviour distinguishing a class from the others. 
A later survey [11] generalised the former into hybridly explainable artificial intelligence after observing that we can always transform unstructured data representations into more structured and logic-driven ones by pre-processing the data in an a priori phase, which can include both data cleaning capabilities [4] and specification mining ones, enriching the structured data representation with contextual interpretations of the raw data. After this, we can carry out the actual learning phase, in which ad hoc explainability will heavily depend on the methodology and the data-representation format of choice. Last, we can consider precision/recall metrics, as well as model extractions from both white-box and black-box classifiers, as ex post explanations of the learning phase. This paper will focus on providing algorithmic solutions for the first two phases, as narrated in the text. To better explain the former framework, we will categorize the forthcoming sections by pigeonholing the current literature over the aforementioned framework. # 2.2. A Priori Explainability Numerical Data. DT specifications (Fig. 1a) provide an $\mathrm { L T L } _ { \mathrm { f } }$ characterization for numerical trends in (univariate) TS $X$ after propositional discretization within a mining phase [20]: this requires a preliminary pre-processing step in which $X$ is associated with durative constituents $\varsigma _ { j + 1 } ^ { X }$ with activity labels $X ^ { \subset }$ or $X ^ { \lnot \subset }$.

| Activity label | Time Series (TS) trace $(A^{\mathrm{i}}, A^{\neg\mathrm{i}})$ |
|---|---|
| IncreaseRapidly($A^{\mathrm{i}}$) | $A^{\mathrm{i}}, \ldots, A^{\mathrm{i}}$ |
| IncreaseSlowlyI($A^{\mathrm{i}}$) | $A^{\neg\mathrm{i}}, A^{\mathrm{i}}, \ldots, A^{\mathrm{i}}$ |
| IncreaseSlowlyII($A^{\mathrm{i}}$) | $A^{\mathrm{i}}, A^{\neg\mathrm{i}}, A^{\mathrm{i}}, \ldots, A^{\mathrm{i}}$ |
| IncreaseSlowlyIII($A^{\mathrm{i}}$) | $A^{\mathrm{i}}, \ldots, A^{\mathrm{i}}, A^{\neg\mathrm{i}}, A^{\mathrm{i}}$ |
| IncreaseSlowlyIV($A^{\mathrm{i}}$) | $A^{\mathrm{i}}, \ldots, A^{\mathrm{i}}, A^{\neg\mathrm{i}}$ |
| HighVolatilityI($A^{\mathrm{i}}$) | $A^{\neg\mathrm{i}}, A^{\mathrm{i}}$ |
| HighVolatilityII($A^{\mathrm{i}}$) | $A^{\neg\mathrm{i}}, A^{\mathrm{i}}, A^{\neg\mathrm{i}}, A^{\mathrm{i}}$ |
| HighVolatilityIII($A^{\mathrm{i}}$) | $A^{\neg\mathrm{i}}, A^{\mathrm{i}}, \ldots, A^{\mathrm{i}}, A^{\neg\mathrm{i}}$ |
| HighVolatilityIV($A^{\mathrm{i}}$) | $A^{\mathrm{i}}, A^{\neg\mathrm{i}}, \ldots, A^{\neg\mathrm{i}}, A^{\mathrm{i}}$ |
| HighVolatilityV($A^{\mathrm{i}}$) | $A^{\mathrm{i}}, A^{\neg\mathrm{i}}, A^{\mathrm{i}}, A^{\neg\mathrm{i}}$ |
| HighVolatilityVI($A^{\mathrm{i}}$) | $A^{\mathrm{i}}, A^{\neg\mathrm{i}}$ |
| DecreaseSlowlyI($A^{\mathrm{i}}$) | $A^{\neg\mathrm{i}}, \ldots, A^{\neg\mathrm{i}}, A^{\mathrm{i}}$ |
| DecreaseSlowlyII($A^{\mathrm{i}}$) | $A^{\neg\mathrm{i}}, \ldots, A^{\neg\mathrm{i}}, A^{\mathrm{i}}, A^{\neg\mathrm{i}}$ |
| DecreaseSlowlyIII($A^{\mathrm{i}}$) | $A^{\neg\mathrm{i}}, A^{\mathrm{i}}, A^{\neg\mathrm{i}}, \ldots, A^{\neg\mathrm{i}}$ |
| DecreaseSlowlyIV($A^{\mathrm{i}}$) | $A^{\mathrm{i}}, A^{\neg\mathrm{i}}, \ldots, A^{\neg\mathrm{i}}$ |
| DecreaseRapidly($A^{\mathrm{i}}$) | $A^{\neg\mathrm{i}}, \ldots, A^{\neg\mathrm{i}}$ |

(a) Expressing DT activity labels as TS silhouettes and pointwise constituents [20]. Fig. 1: Declarative languages for a priori explainability over (a) TS or (b) non-polyadic logs. (b) Our dataful DECLARE [27] subset of interest, where $A$ (respectively, $B$) denotes the activation (resp., target) condition, and $p$ (resp., $q$) denotes the associated dataful payload conditions. 
Dataless variants can be expressed by setting $p$ and $q$ to true. † marks the clauses subject to dataful refinement in the present paper. $\lambda ( \varsigma _ { j + 1 } ^ { X } ) = X ^ { \subset }$ if $X ( j + 1 ) > X ( j )$, and $X ^ { \lnot \subset }$ otherwise. We can then determine the occurrence of a pattern by simply joining constituents $X ^ { \subset }$ and $X ^ { \lnot \subset }$ into durative ones referring to the same temporal span, which can then be matched and rewritten into a durative event having the activity label described in the first column of Fig. 1a. The authors mainly use this declarative representation for TS forecasting purposes, but give no evidence of exploiting it in the context of MTS while correlating disparate DTs across MTS variables, which might be expressed through DECLAREd. To achieve this, our previous work premiered their combination. Although DT discards the numerical information associated with such trends, it generalises the common shapelet approach [19] to arbitrary growth and decrease patterns expressed declaratively through human-understandable patterns. Event Data. DECLARE [27] (Fig. 1b) is a temporal declarative language over logs that considers possible temporal behaviour in terms of correlations between activated events and targeted conditions, where the former refers to the necessary but not sufficient condition for satisfying a clause. Both activation and target conditions are defined through activity labels referring to a specific action of interest; these can be enriched with dataful propositional formulæ predicating payload conditions over the activated constituents. Unless stated otherwise, if the clause has no activation, it is trivially (vacuously) satisfied. E.g., Precedence might also be vacuously violated due to the presence of just target conditions [9]. 
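The activation/target semantics and vacuous satisfaction can be illustrated with a toy encoding of the Response(A, B) template. This is a purely illustrative sketch: the clause and trace representations below are our own, not the paper's data model:

```python
def response(a, b):
    """Response(a, b): every occurrence of the activation `a` must
    eventually be followed by the target `b`. With no activation in the
    trace, the clause holds vacuously (all() over an empty generator)."""
    def clause(trace):
        return all(b in trace[i + 1:] for i, e in enumerate(trace) if e == a)
    return clause

r = response("A", "B")
r(["A", "C", "B"])  # activation followed by a later target: satisfied
r(["A", "C"])       # activated but never targeted: violated
r(["C", "B"])       # no activation at all: vacuously satisfied
```

The activation is the necessary but not sufficient condition: only traces containing "A" can non-vacuously satisfy or violate the clause.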
DECLARE clauses can be instantiated over a specific alphabet $\Sigma$ referring to the activity labels associated with constituents, and they can be composed to generate conjunctive specifications $\varPhi = \{ c _ { 1 } , \ldots , c _ { l } \}$; we say that a trace satisfies a conjunctive specification if it satisfies all its clauses. These conjunctive specifications can be extracted from temporal data represented as a (non-polyadic) log by efficiently exploiting a specific-to-general lattice search, through which we can preventively prune the generation of overly specific clauses [9]. Bolt2 [9] mines non-polyadic logs to extract conjunctive DECLARE specifications, where events are non-durative and every single event contains one and only one constituent. This algorithm achieves efficient specification mining by testing dataless DECLARE clauses, where the most general clause is tested first, followed by the ones it enables. This generates a lattice of DECLARE clauses: the clause search stops according to a quality metric, thus ensuring the capture of the most specific behaviour of the single log without returning underrepresented clauses that do not generalize over the data. This solution does not contemplate any further refinement of the clauses into dataful ones, so as to associate non-trivially-true data predicates with either the activation, target, or correlation conditions. Other specification mining algorithms overcome this limitation [23] and, despite not using an efficient general-to-specific visit as the former does, consider clustering over the activation conditions for deriving propositional representations of the data. This consideration was blindly adopted in DML [12] to refine each mined DECLARE clause so as to generate dataful clauses differentiating traces belonging to different classes. 
This poses a significant limitation in this scenario, as that mining approach was not initially designed for this purpose, and thus does not necessarily help refine the activation conditions according to the class associated with each single activation or target condition. This paper proposes a completely different approach, where white-box classifiers are used to characterize the activation (and target) predicates with the clear intent of capturing distinct class behaviours, and not merely of aggregating data by shared data features, which are not necessarily related to a class of interest. # 2.3. Ad Hoc Explainability Numerical Data. This subsection describes the main competitors of EMeriTAte+DF considered in this paper, as they constitute MTS classifiers. (KNN-based) Clustering for time series [21] works by first identifying time series clusters depending on a distance function of choice and then associating each cluster with its majority class; this makes it possible to boil down a clustering problem to a classification one by associating each numerical-based representation with the most similar element of the majority class, similarly to [7]. A straightforward Euclidean metric can determine the pointwise distance of the MTS across dimensions (E-KNN). The main drawback of this approach is that it neither considers the evolution of DTs within the MTS nor allows aligning similar trends displaced in time, as dynamic time warping does. Furthermore, no mechanisms are designed to efficiently test the different patterns at stake by considering a hierarchy of temporal patterns, one being the generalization of the other; doing so could greatly boost the mining task while increasing the overall number of temporal patterns of interest. Rocket [16] exploits randomized convolutional kernels to extract relevant features from the MTS, which are then fed to a linear classifier associating such features with a class of interest. 
While this feature extraction approach cannot adequately capture DTs and variations across dimensions, it also guarantees scarce explainability, as the information is summarised into kernel values. TapNet [32] improves on the latter by exploiting attention networks to select the best classification features, while still heavily relying on upfront feature selection mechanisms such as dimensionality reduction, which might lose relevant information to the detriment of classification precision and accuracy. CANONICAL INTERVAL FOREST (CIF) [26] achieves explainability by exploiting white-box classifiers, such as decision trees, over catch22 features [25] computed on a selection of time intervals; these describe different types of numerical variation rather than changes in trends and data patterns and their temporal correlations. SHAPELET TRANSFORM CLASSIFIER (STC) [19] characterizes MTS in terms of distinctive temporal features, the shapelets, i.e., frequently occurring trends across all TS, and then describes each MTS in terms of its distance from each selected shapelet. Notwithstanding that this approach considers numerical features that are completely discarded through DT, it fails to establish correlations across trends occurring in different dimensions, thus not establishing correlations across differently observed behaviours. Event Data. DEVIANT MODEL LEARNING (DML) [12] extracts specifications for trace classes $y \in \mathcal { Y }$ through declarative (e.g., DECLARE clauses) and procedural (i.e., association rules) features previously mined from the data. Each trace can be represented as an $n$-dimensional embedding $\vec { x } _ { \sigma } \in \mathbb { R } ^ { n }$, where $n$ is the overall number of features. 
E.g., when a DECLARE clause $c _ { \ell }$ is chosen as the $\ell$-th feature, a value of $\vec { x } _ { \sigma } [ \ell ] = - 1 / 0 / n$ denotes that the trace $\sigma$ violates $c _ { \ell }$, vacuously satisfies it, or satisfactorily activates it $n$ times, respectively. For the features providing declarative characterizations, DML extracts those via the ADMS specification mining algorithm [9]. Our previous approach, EMeriTAte [14], generalised over the former algorithm in its ad hoc phase by considering a different log model, where each event (and not just a whole trace) is associated with a class, and where each event is composed of different constituents representing concurrent activities starting at the same time with possibly different durations. These characteristics also enable Event Data algorithms to consider numerically-driven problems with different dimensions. At the same time, previous temporal declarative-driven approaches were limited to univariate TS [17], and were only used for verification tasks, not for specification mining ones. We further expand on this data model by considering multiple taxonomies of activity labels to determine which events might be regarded as sub-events of a more general type (Sect. 3.1), so as to better narrow down the amount of relevant correlations (Sect. 4.2).

# 3. Problem Statement

We denote by $\mathbb { B } ( b )$ the function mapping true values of $b$ to $1$ and returning $- 1$ otherwise. Given a function $F$ having as a domain a closed interval in $\mathbb { N }$, the set $\mathbb { I } _ { F } ( [ B , E ] )$ of the maximal intervals in $[ B , E ] \subseteq \mathbb { N }$ is the set of the largest non-overlapping intervals in $[ B , E ]$ over which $F$ is constant, where the values adjacent to each interval differ in $F$:

$$ \mathbb { I } _ { F } ( [ B , E ] ) = \left\{ [ b , e ] \;\middle|\; B \leq b \leq e \leq E ,\ ( b > B \Rightarrow F ( b - 1 ) \neq F ( b ) ) ,\ ( e < E \Rightarrow F ( e + 1 ) \neq F ( e ) ) ,\ \forall b \leq \tau \leq e .\, F ( \tau ) = F ( b ) = F ( e ) \right\} $$

Given a MTS $T$, we refer to its size $| T |$ as the number of recorded events counting from 1, thus $\mathsf { d o m } ( T ) = \{ 1 , \dots , | T | \}$. When $T \colon { \mathbb { N } } \to { \mathbb { R } } ^ { d }$ is clear from the context, given a dimension $x \leq d$, we use $x ( t )$ as a shorthand for $T ( t ) ( x )$. Given a time interval $( i , j )$ and a MTS $T$ of at least size $j$, we denote by $T [ i , \ldots , j ]$ its projection onto the subsequence of the events occurring within this interval: $T [ i , \dots , j ] ( x ) = T ( x + i - 1 )$ if $x + i - 1 \in \mathsf { d o m } ( T )$ and $1 \leq x \leq j - i + 1$. Given a MTS $T _ { \mathfrak { E } }$ within a training dataset $D$, where c represents the time-wise classification dimension, and a set of maximal and non-overlapping temporal subsequences $[ i , j ] \in \mathbb { I } _ { T _ { \mathfrak { E } } ( \cdot ) ( \mathsf { c } ) } ( [ 1 , | T _ { \mathfrak { E } } | ] )$ targeting the same event time class value for dimension c, we want to learn a function $h$ minimising the classification error $\sum _ { i \leq \tau \leq j } \left| h ( T _ { \mathfrak { E } } [ i , \dots , j ] ) - T _ { \mathfrak { E } } ( \tau ) ( \mathsf { c } ) \right|$ for each maximal interval $[ i , j ]$ and for all $T _ { \mathfrak { E } }$ in $D$. This is a more general formulation than the one posed by the current literature, which considers time series where $\mathbb { I } _ { T _ { \mathfrak { E } } ( \cdot ) ( \mathsf { c } ) } ( [ 1 , | T | ] ) = \{ [ 1 , | T | ] \}$, i.e., where a single classification label is associated with every timestamp.

# 3.1.
Polyadic Logs

A polyadic log $\mathfrak { S }$ is a pair $\langle \mathbb { G } , \mathcal { L } \rangle$, where $( i )$ $\mathcal { L }$ is a collection of distinct polyadic traces $\{ \sigma ^ { 1 } , \ldots , \sigma ^ { n } \}$ referring to the auditing of a specific environment ${ \mathfrak E }$ of interest (e.g., a MTS); each trace is a list of temporally ordered polyadic events $[ \sigma _ { 1 } ^ { i } , \dots , \sigma _ { o } ^ { i } ]$, where each ${ \boldsymbol { \sigma } } _ { j } ^ { i }$ is defined as a pair of a set of durative constituents and a class, $\sigma _ { j } ^ { i } = \langle \{ \varsigma _ { j , 1 } ^ { i } , \dots , \varsigma _ { j , m } ^ { i } \} , \mathsf { c l a s s } \rangle$, in which all constituents start at the same time $j$ but possibly with different activity labels and duration spans; in particular, each durative constituent $\varsigma _ { j , k } ^ { i }$ is expressed as a triplet $\langle \mathsf { a } , p , s \rangle$ where $\mathsf{a}$ is the activity label, $p$ is the payload collecting the raw data associated with the durative constituent, and $s$ denotes its temporal span s.t. $s \leq n$. In this, we denote by $\lambda$, $\varpi$, and $\delta$ the functions extracting the activity label ($\lambda ( \varsigma _ { j , k } ^ { i } ) = \mathsf { a }$), the payload ($\varpi ( \varsigma _ { j , k } ^ { i } ) = p$), and the span ($\delta ( \varsigma _ { j , k } ^ { i } ) = s$).
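A minimal Python sketch of this data model (class and accessor names are ours, not the paper's implementation): a durative constituent is the triple $\langle \mathsf{a}, p, s \rangle$, and $\lambda$, $\varpi$, $\delta$ project its components:

```python
from dataclasses import dataclass

# Sketch of the polyadic-log data model under assumed names: a durative
# constituent is <activity label, payload, span>; a polyadic event pairs a set
# of constituents starting at the same time with a class.
@dataclass(frozen=True)
class Constituent:
    label: str      # activity label a
    payload: tuple  # raw data associated with the constituent
    span: int       # temporal span s

def lam(c): return c.label      # lambda: activity-label extractor
def varpi(c): return c.payload  # varpi: payload extractor
def delta(c): return c.span     # delta: span extractor

# A polyadic event: constituents sharing the same start time, plus a class.
event = ({Constituent("IncreaseRapidly+dim1i", (("mean", 0.4),), 3),
          Constituent("HighVolatility3+dim2v", (("mean", 1.1),), 5)},
         "anomalous")
constituents, clazz = event
assert {lam(c) for c in constituents} == {"IncreaseRapidly+dim1i",
                                          "HighVolatility3+dim2v"}
assert max(delta(c) for c in constituents) == 5
```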
We denote by $\kappa ( \varsigma _ { j , k } ^ { i } )$ the function concatenating all of this information into a finite function $\varpi ( \varsigma _ { j , k } ^ { i } ) \circ [ \_\_\mathtt { l a b e l } \mapsto \lambda ( \varsigma _ { j , k } ^ { i } ) , \_\_\mathtt { s p a n } \mapsto \delta ( \varsigma _ { j , k } ^ { i } ) ]$. Last, $( i i )$ $\mathbb { G }$ is a collection of taxonomies [10] represented as Directed Acyclic Graphs $G _ { i } = \langle N _ { G } , R _ { G } , \ell _ { G } \rangle$ rooted in $\ell _ { G }$, determining the relationships between activity labels within the log ${ \mathfrak { S } }$; the entities are represented as a set of nodes $N _ { G }$, and $R _ { G }$ is the set of the is-a relationships.

# 3.2. Poly-DECLARE

Polyadic traces trivially violate the temporal non-simultaneity assumption of non-polyadic logs [8], for which each non-polyadic event cannot be associated with multiple activity labels, while now one single polyadic event might contain constituents with different activity labels. As a result, the former algorithms did not consider concurrent violation conditions: e.g., for Precedence(A,C), we now also prescribe that a C-labelled constituent shall never co-occur with an A-labelled constituent (cfr. the occurrence of $\leq$, Algorithm 4). Achieving all these desiderata requires completely revising the previous algorithms to adapt them to the novel log assumptions (see Algorithm 5). Given that each event might contain multiple constituents, we extend the traditional DECLARE semantics by checking whether some or all the constituents activated within a specific event satisfy the given declarative clause. We retain the former as the default interpretation of the DECLARE clauses, thus retaining the same template name, while, for indicating the latter, we explicitly append an All prefix to the names outlined in Fig. 1b.

# 4.
EMeriTAte+DF

This section introduces the algorithmic extensions moving from EMeriTAte to its DATAFUL (DF) variant (EMeriTAte+DF). Unlike our previous contribution, we characterize our algorithms in terms of Explainable and Verified AI. While the A Priori phase adds more contextual information on the data series, the Specification Mining and the White-Box model learning for deriving the explainable specifications from the data are carried out in the Ad Hoc phase. The evaluation section provides the Post Hoc evaluation of the trained models over different datasets using the default metrics.

# 4.1. A Priori Explainability

The a priori explainability phase improves over [20] as Algorithm 1 by adequately indexing MTS to support faster DT mining algorithms, while also simplifying away the redundant patterns from Fig. 1a. Differently from the previous approach, we extend each constituent with DT activity labels, previously represented as a dataless durative constituent, with Catch22 features as data payloads for the mined durative constituents. Loading and Indexing. The first step loads the MTS discretized as a preliminary pointwise log of events expressing the verification (or not) of a specific numerical condition $x$ expressing different types of numerical variations (Table 1), such as increase (i), absence (a), stationarity (s), and variability (v), by creating pointwise events satisfying or not an associated condition $P _ { x } ( t )$. This will then be used to mine the DT patterns.
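As an illustration of this discretization step, the sketch below assumes simple stand-in predicates for the four variation types; the actual $P_x$ predicates are those of Table 1 and are not reproduced here:

```python
# Hedged illustration of the point-wise discretization: the paper's actual P_x
# predicates come from Table 1; here we assume simple stand-ins per variation type.
def discretize(series, eps=0.1):
    """Map each timestamp t >= 2 of a univariate series to the truth values of
    four illustrative predicates: increase (i), absence (a, near-zero value),
    stationarity (s, small change), variability (v, large change)."""
    out = []
    for t in range(1, len(series)):
        d = series[t] - series[t - 1]
        out.append({
            "i": d > 0,                    # value increased
            "a": abs(series[t]) <= eps,    # value near zero
            "s": abs(d) <= eps,            # change stays small
            "v": abs(d) > eps,             # change is large
        })
    return out

events = discretize([0.0, 0.05, 1.0, 1.02])
print([e["i"] for e in events])  # [True, True, True]
print([e["v"] for e in events])  # [False, True, False]
```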
Table 1: Point-wise predicates $P _ { x } ^ { \tau }$ for discretizing a time series $\tau$ over dimension $i$ and representing those as values $\mathit { v a l } _ { x } ^ { \tau }$.

We perform a linear scan of each multivariate time series $T _ { \mathfrak { E } }$ so as to identify the maximal intervals $[ B , E ] \in \mathbb { I } _ { T _ { \mathfrak { E } } ( \cdot ) ( \mathfrak { c } ) } ( [ 1 , | T _ { \mathfrak { E } } | ] )$ associated with the same class reported in dimension c (Line 2), thus considering a projection $\tau$ for each of these $[ B , E ]$. For each dimension of interest (Line 4), we discretize each dimension and timestamp as in our previous paper [14] by considering numerical variations (Line 5). All such constituents are then stored in events (Line 9) contained in a log ${ \mathfrak { S } } _ { x } ^ { i , \tau }$ for each dimension $i$, segment $[ B , E ]$ as $\tau$, and numerical variation $x$ (Line 6). Next, we index the previously-discretized logs according to the satisfaction (Line 10) or violation (Line 11) of the predicate $P _ { x } ^ { \tau }$. We ensure that this information is computed only once by identifying regions of a specific time series' dimension where numerical variation conditions continue to hold or not. As the intervals in such indices never overlap, we can store them in order of increasing beginning time and query them in $O ( \log | \mathcal { G } _ { x , - } ^ { i , \tau } | )$ to check whether the time series at dimension $i$ satisfies the numerical variation condition $x$ at a specific time $t$. Hence, we denote and define such a query as $t \in \mathcal { G } _ { x , - } ^ { i , \tau } \Leftrightarrow \exists [ b , e ] \in \mathcal { G } _ { x , - } ^ { i , \tau } .\, t \in [ b , e ]$; this notion is used in the forthcoming subsection. Lemma 1 (Indexing and Loading Time Complexity).
Given a collection of $N$ MTS of $d$ dimensions with maximum length $t$, the time complexity is in $O ( N d t )$. Proof. Given that all insertions on the indices can be performed while linearly scanning the data, and assuming that each $P _ { x } ^ { \tau } ( t )$ can be tested in constant time, the time complexity of scanning each maximal time interval $[ B , E ]$ and each value in it boils down to the maximal time series length $t$. Given that the number of point-wise predicates is constant, we can conclude that the linear scan is performed for each dimension and MTS within the collection, thus leading to $O ( N d t )$. □

# Algorithm 1 A Priori Explainability

Indexed DT Mining. This phase enables the representation of a time series as a polyadic trace, where the first constituent of each $j$-th event reflects the values associated with each dimension at time $j$, i.e. $T _ { \mathfrak { E } } ( j )$ (Lines 43-44), while the remainder is derived from the DT-mined pointwise constituents from the previous phase. The composition of the $\mathcal { G }$-indexed intervals generates DT patterns distinguishable by their constituent activity label (Line 12), where the name of the DT pattern reports the dimension $i$ and the specific numerical variation $x$. The algorithm proceeds by scanning each maximal interval $[ B , E ]$ of a time series $T _ { \mathfrak { E } }$ over each dimension $i$, numerical variation $x$, and $\mathcal { G }$-indexed satisfiability interval $[ b , e ]$ (Lines 49-50). The segmentation of the time series through the $\mathcal { G }$ indices allows us to promptly derive that any subsequent or preceding interval within the same dimension will violate some $x$ constraint if the current interval satisfies it, and vice versa.
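The $\mathcal{G}$ index described above can be sketched as a sorted list of non-overlapping intervals queried by binary search (a minimal Python rendition under our own naming):

```python
import bisect

# Sketch of the G interval index: non-overlapping [b, e] intervals stored by
# increasing beginning time, so membership t in G is a binary search.
class IntervalIndex:
    def __init__(self, intervals):
        self.intervals = sorted(intervals)           # non-overlapping [b, e] pairs
        self.begins = [b for b, _ in self.intervals]

    def __contains__(self, t):
        # Rightmost interval whose beginning is <= t, then check its end.
        pos = bisect.bisect_right(self.begins, t) - 1
        return pos >= 0 and t <= self.intervals[pos][1]

g = IntervalIndex([(1, 3), (7, 9)])
assert 2 in g and 7 in g and 9 in g
assert 5 not in g and 0 not in g
```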
Differently from our previous paper, for each mined constituent referring to a specific dimension $i$ and interval $[ \beta , \eta ] \subseteq [ b , e ]$, we generate a payload containing the Catch22 [25] features (C22) capturing the dynamic properties of the $i$-th dimension in $\tau ^ { i } [ \beta , \eta ]$, which are paired with the timestamp fluctuations. By doing so, we generalise over the CIF classifier [26], which focuses on distinguishing time series from the behaviours occurring within a specific fixed-size sliding window, while potentially considering the features for all possible sliding windows captured as DT patterns. The computation of the Catch22 payload is parallelized due to the computationally intensive nature of the metrics used to describe the numerical fluctuations of the data. The main mining algorithm considers a subset of all the DT patterns described in Section ??, where some high volatility patterns are discarded so as to favour the recognition of shorter volatility patterns rather than joining prolonged variations. Overall, this helps to reduce the number of patterns to be returned through the combination of subsequent patterns, while reducing the time to mine the clauses without considerably undermining the predictive power of the classifier. At the end of the process, each time series $T _ { \mathfrak { E } }$ is discretized into a single polyadic trace $\sigma ^ { \mathfrak { E } }$.

# Lemma 2.

The worst-case time complexity for the indexed DT mining is superpolynomial over the maximum length $s$ of a segmented interval and linear over the number and size of the MTS. Proof. Note that the worst-case time complexity does not occur for MTS exhibiting stable values, as this would lead to just one maximal interval $[ B , E ]$ per class.
In fact, these situations have $\xi$ and the condition at Line 26 as false, as there will be no other neighbouring intervals to consider. This will only lead to returning the durative constituent at Line 21, where the Catch22 statistics are still computed. As the time complexity of computing all the Catch22 statistics depends on the length of the interval of interest, we denote it as $C _ { s }$. Thus, the worst-case time complexity occurs when we have at least two intervals, leading not only to the generation of the aforementioned durative constituent, but potentially to other patterns generated within the nested iteration. Given that the nested loop leads to $O ( s ^ { 2 } )$ iterations within which we test queries taking $O ( \log s )$ time, we obtain an overall time complexity of $O ( s ^ { 2 } \log s \, C _ { s } \frac { t } { s } N )$. □

12: function DL($\ell$, $\nu$; $x$, $i$)
13:   if $\ell = \mathrm { S }$ then return ($\nu$ ? IncreaseRapidly : DecreaseRapidly)$+ ( \mathsf { d i m } _ { i } ^ { x } )$
14:   if $\ell = \mathrm { H V } 4 3$ then return ($\nu$ ? HighVolatility3 : HighVolatility4)$+ ( \mathsf { d i m } _ { i } ^ { x } )$
15:   if $\ell = 1 \mathrm { H }$ then return ($\nu$ ? IncreaseSlowly1 : DecreaseSlowly4)$+ ( \mathsf { d i m } _ { i } ^ { x } )$
16:   if $\ell = 2 \mathrm { H }$ then return ($\nu$ ? IncreaseSlowly2 : DecreaseSlowly3)$+ ( \mathsf { d i m } _ { i } ^ { x } )$
17:   if $\ell = \mathrm { E } 1$ then return ($\nu$ ? IncreaseSlowly3 : DecreaseSlowly2)$+ ( \mathsf { d i m } _ { i } ^ { x } )$
18:   if $\ell = \mathrm { E } 2$ then return ($\nu$ ? IncreaseSlowly4 : DecreaseSlowly1)$+ ( \mathsf { d i m } _ { i } ^ { x } )$
19: procedure DTMINESTEP($\tau$, $i$, $x$, $\nu$, $[ b , e ]$; $\mathfrak { E }$)
20:   $n \gets e - b + 1$
21:   yield $\varsigma _ { b , \mathsf { f r e s h } ( b ) } ^ { \mathfrak { E } } := \langle \mathrm { D L } ( \mathrm { S } , \nu ; x , i ) , \mathrm { C } 2 2 ( \tau ^ { i } ) \circ \mathrm { C } 2 2 ( [ b , e ] ) , n \rangle$
22:   for all $1 \leq s \leq n$ do
23:     for all $b \leq \beta \leq e - s + 1$ do
24:       $\xi \gets ( e + 1 ) \in \mathcal { G } _ { x , \mathbf { n o t } \, \nu } ^ { i , \tau }$ and $\beta = e$; $\eta \gets \beta + s - 1$
25:       if $\beta = b$ then
26:         if $( b - 1 ) \in \mathcal { G } _ { x , \mathbf { n o t } \, \nu } ^ { i , \tau }$ then
27:           yield $\varsigma _ { \beta - 1 , \mathsf { f r e s h } ( \beta - 1 ) } ^ { \mathfrak { E } } := \langle \mathrm { D L } ( 1 \mathrm { H } , \nu ; x , i ) , \mathrm { C } 2 2 ( \tau ^ { i } [ \beta - 1 , \eta ] ) \circ \mathrm { C } 2 2 ( [ \beta - 1 , \eta ] ) , s + 1 \rangle$
28:         if $\xi$ then
29:           yield $\varsigma _ { \beta - 1 , \mathsf { f r e s h } ( \beta - 1 ) } ^ { \mathfrak { E } } := \langle \mathrm { D L } ( \mathrm { H V } 4 3 , \nu ; x , i ) , \mathrm { C } 2 2 ( \tau ^ { i } [ \beta - 1 , \eta + 1 ] ) \circ \mathrm { C } 2 2 ( [ \beta - 1 , \eta + 1 ] ) , s + 2 \rangle$
30:         if $( \beta - 2 ) \in \mathcal { G } _ { x , \nu } ^ { i , \tau }$ then
31:           yield $\varsigma _ { \beta - 2 , \mathsf { f r e s h } ( \beta - 2 ) } ^ { \mathfrak { E } } := \langle \mathrm { D L } ( 2 \mathrm { H } , \nu ; x , i ) , \mathrm { C } 2 2 ( \tau ^ { i } [ \beta - 2 , \eta ] ) \circ \mathrm { C } 2 2 ( [ \beta - 2 , \eta ] ) , s + 2 \rangle$
32:       if $\beta + s - 1 = e$ and $\xi$ then
33:         found $\gets$ false
34:         if $[ e + 1 , e + 2 ] \in \mathcal { G } _ { x , \mathbf { n o t } \, \nu } ^ { i , \tau }$ and $e + 2 \in \mathcal { G } _ { x , \nu } ^ { i , \tau }$ then
35:           found $\gets$ true
36:           yield $\varsigma _ { \beta , \mathsf { f r e s h } ( \beta ) } ^ { \mathfrak { E } } := \langle \mathrm { D L } ( \mathrm { E } 1 , \nu ; x , i ) , \mathrm { C } 2 2 ( \tau ^ { i } [ \beta , \eta + 2 ] ) \circ \mathrm { C } 2 2 ( [ \beta , \eta + 2 ] ) , s + 2 \rangle$
37:         if not found then
38:           yield $\varsigma _ { \beta , \mathsf { f r e s h } ( \beta ) } ^ { \mathfrak { E } } := \langle \mathrm { D L } ( \mathrm { E } 2 , \nu ; x , i ) , \mathrm { C } 2 2 ( \tau ^ { i } [ \beta , \eta + 1 ] ) \circ \mathrm { C } 2 2 ( [ \beta , \eta + 1 ] ) , s + 1 \rangle$
39: procedure DTMINE($T _ { \mathfrak { E } }$)
40:   for all $[ B , E ] \in \mathbb { I } _ { T _ { \mathfrak { E } } ( \cdot ) ( \mathfrak { c } ) } ( [ 1 , | T _ { \mathfrak { E } } | ] )$ do
41:     $\tau \gets T _ { \mathfrak { E } } [ B , E ]$
42:     for all $1 \leq j \leq E - B + 1$ do
43:       $\sigma _ { B + j - 1 , 0 } ^ { \mathfrak { E } } := \langle \_\_\mathtt { r a w \_ d a t a } , [ \mathtt { d i m } \_ i \mapsto T _ { \mathfrak { E } } ( B + j - 1 ) ( i ) ] _ { 1 \leq i \leq d - 1 } , 1 \rangle$
44:       $\sigma ^ { \mathfrak { E } } \gets \sigma ^ { \mathfrak { E } } \cup \{ \sigma _ { B + j - 1 , 0 } ^ { \mathfrak { E } } \}$
45:     for $i$ from 1 to $d - 1$ do
46:       $\tau ^ { i } \gets ( \tau ( 1 ) ( i ) , \dots , \tau ( | \tau | ) ( i ) )$
47:       for all $x \in \{ \mathrm { i } , \mathrm { a } , \mathrm { s } , \mathrm { v } \}$ do
48:         for all $[ b , e ] \in \mathcal { G } _ { x , \nu } ^ { i , \tau }$ do
49:           $\sigma ^ { \mathfrak { E } } \gets \sigma ^ { \mathfrak { E } } \cup$ DTMINESTEP($\tau$, $i$, $x$, true, $[ b , e ]$; $\mathfrak { E }$)
50:           $\sigma ^ { \mathfrak { E } } \gets \sigma ^ { \mathfrak { E } } \cup$ DTMINESTEP($\tau$, $i$, $x$, false, $[ b , e ]$; $\mathfrak { E }$)
51:   return $\sigma ^ { \mathfrak { E } }$

Serialization. At serialization time, we avoid writing any potentially redundant constituent having the same activation label but referring to a shorter time interval than its sibling constituents, thus capturing the behaviour associated with the longest span.
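The redundancy filter can be sketched as follows (our own minimal formulation, not the paper's implementation): among sibling constituents sharing the same activity label, only the longest span survives serialization:

```python
# Sketch of the serialization filter: for each activity label among sibling
# constituents of one polyadic event, keep only the longest span.
def drop_redundant(constituents):
    """constituents: iterable of (label, span) pairs for one polyadic event."""
    best = {}
    for label, span in constituents:
        if label not in best or span > best[label]:
            best[label] = span
    return sorted(best.items())

siblings = [("IncreaseSlowly1+dim1i", 2), ("IncreaseSlowly1+dim1i", 5),
            ("HighVolatility3+dim2v", 4)]
print(drop_redundant(siblings))
# [('HighVolatility3+dim2v', 4), ('IncreaseSlowly1+dim1i', 5)]
```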
We generate a collection of taxonomies relating each constituent label back to the original MTS dimension the pattern refers to: we generate a taxonomy $G _ { i } = \langle N _ { G } , R _ { G } , \ell _ { G } \rangle$ for each dimension $i$, where $N _ { G }$ contains all the DTs associated with $\mathsf { d i m } _ { i } ^ { x }$, including $\mathsf { d i m } _ { i } ^ { x }$ itself acting as the root $\ell _ { G }$; $R _ { G }$ connects only the root with the associated DT labels. Given the impossibility of serializing such data in XES [2] due to its strict assumptions on the log representation, we serialize the resulting information in a custom JSON representation.

# 4.2. Ad Hoc Explainability

We load the single log, composed of many traces with classes changing over time, as many segmented logs ${ \mathfrak { S } } _ { y }$ as the total number of distinctly occurring classes, by segmenting the traces into maximal intervals where the same class holds. Then, differently from our previous solution, where each segmented log was mined separately via Bolt2 [9] to then derive the trace embedding in a later phase, we now load and mine all the segmented logs containing maximal sequences referring to the same classes at the same time. We then proceed to the DECLARE clause refinement for the clauses shared across the logs as a result of the mining phase: as generic dataless DECLARE clauses with DT activity labels might not be sufficient to temporally characterize the classes, we try to specialize each clause according to the specificity of each log/class. The rationale is the following: if the same dataless clause is frequently mined from two segmented logs, at least one trace exists in each of these where the dataless clause is both activated and holds.
Then, the only way to differentiate the clause across the two segmented logs is to change the satisfiability outcome of the traces by refining the original dataless clause with dataful predicates for either the activation or the target condition. Suppose the payloads referring to the constituents activated by the clause can be differentiated. In that case, we can refine the original dataless clause into two dataful ones, each with an activation condition ($p$ and $p ^ { \prime }$) satisfying only the activations from one segmented log. This leads to one dataful clause being activated over the traces of one segmented log, while being vacuously satisfied by the traces of the other. If that does not suffice, the only further way to differentiate is to generate target predicates similar to the above: in this other scenario, the missed satisfaction of the target condition will lead to a violation of the clause, thus further refining the trace behaviour in terms of clause (un)satisfaction. This significantly differs from other deviance mining approaches [23], where activation conditions are mainly clustered, not necessarily according to the target classes of interest [12], thus not achieving a truthful clause separation for differentiating them across the distinct logs as discussed above. By exploiting a decision tree as a classifier, we can ensure that the data predicates used to differentiate the classes will summarize the data properties as unary dataful predicates, which might be reconstructed by traversing each branch of the learned decision tree and interpreting it as a single propositional formula $\pi$. Trace Loading and Segmentation. Similarly to our previous paper, for each trace $\sigma ^ { \mathfrak { E } _ { A } }$ in the JSON-serialized log $\mathfrak { S }$ describing a distinct MTS, we scan all polyadic events, and we group them by maximal contiguous sequences associated with the same class $y$.
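The grouping into maximal contiguous same-class runs can be sketched with `itertools.groupby` (a minimal version under our own naming, with events given as (event, class) pairs):

```python
from itertools import groupby

# Sketch of the trace segmentation step: split one trace, given as
# (event, class) pairs, into maximal contiguous same-class runs, and collect
# each run as a fresh trace in the per-class log S_y.
def segment_by_class(trace):
    logs = {}
    for clazz, run in groupby(trace, key=lambda ev: ev[1]):
        logs.setdefault(clazz, []).append([ev[0] for ev in run])
    return logs

trace = [("e1", "ok"), ("e2", "ok"), ("e3", "bad"), ("e4", "ok")]
print(segment_by_class(trace))
# {'ok': [['e1', 'e2'], ['e4']], 'bad': [['e3']]}
```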
Each class $y$ will then correspond to one single final log ${ \mathfrak { S } } _ { y }$, where each maximal contiguous sequence of polyadic events will correspond to one single new polyadic trace in ${ \mathfrak { S } } _ { y }$. As this transformation also preserves the __raw_data information associated with each polyadic event, we can later collect just the raw data associated with the polyadic traces for running atemporal event classification tasks, by simply correlating the class within the __raw_data payload with the other available values.

# Algorithm 2 Ad Hoc Explainability

1: function POLYADICDATAAWARE($\mathfrak { S } _ { 1 } , \dots , \mathfrak { S } _ { n }$; $\theta$)
2:   $\Sigma \gets \bigcup _ { \mathfrak { S } _ { \iota } = \langle \mathbb { G } , \mathcal { L } \rangle , \langle N _ { G } , R _ { G } , \ell _ { G } \rangle \in \mathbb { G } } N _ { G }$ ▷ Determining the alphabet
3:   $\Phi \gets \varnothing$ ▷ Clauses not requiring dataful refinement
4:   for all $1 \leq \iota \leq n$ do
5:     for all $\sigma ^ { i } \in \mathcal { L }$, $\mathfrak { S } _ { \iota } = \langle \mathbb { G } , \mathcal { L } \rangle$ do
6:       DataFrame[$( \iota , i )$, clazz] := $\iota$ ▷ Segmented trace/class assoc.
7:     $P _ { \iota } \gets$ FREQUENTITEMSETS($\mathfrak { S } _ { \iota }$, $\theta$) ▷ Maximal length of 2
8:     FreqPairs$_ { \iota }$, $\Phi _ { \iota } \gets$ GENERATEUNARYCLAUSES($P _ { \iota }$) ▷ Unary clauses [9]
9:   $\Phi \gets$ UNARYREFINE($\Phi _ { 1 } , \dots , \Phi _ { n }$, $\Phi$; $\mathfrak { S } _ { 1 } , \dots , \mathfrak { S } _ { n }$) ▷ Algorithm 3
10:  FP $\gets \bigcup _ { 1 \leq \iota \leq n }$ FreqPairs$_ { \iota }$
11:  for all $\langle A , B \rangle \in$ FP do
12:    $C \gets \{ \star ( A , B ) \mid \star \in \mathit { B i n a r y C l a u s e s } \}$ ▷ Maximising recall by probing all clauses
13:    if $\exists ! j .\ \langle A , B \rangle \in$ FreqPairs$_ { j }$ then
14:      $\Phi \gets \Phi \cup C$
15:    else
16:      $\Phi \gets \Phi \cup$ BINARYREFINE($A$, $B$, $\mathfrak { S } _ { 1 } , \dots , \mathfrak { S } _ { n }$; true) ▷ Algorithm 4
17:  global DataFrame
18:  for all $1 \leq \iota \leq n$ do ▷ Dataless [14]
19:    for all $\sigma ^ { i } \in \mathcal { L }$, $\mathfrak { S } _ { \iota } = \langle \mathbb { G } , \mathcal { L } \rangle$ do
20:      for all clause $\in \Phi$ do
21:        DataFrame[$( \iota , i )$, clause] := $\mathbf { 1 } _ { \sigma ^ { i } \models \mathrm { c l a u s e } } - \mathbf { 1 } _ { \sigma ^ { i } \not\models \mathrm { c l a u s e } }$
22:  return DECISIONTREE($\{ ( [ k \to v ] , r o w [ \mathrm { c l a z z } ] ) \mid r o w \in \mathrm { D a t a F r a m e } \}$)

Polyadic Deviant Model Learning. Algorithm 2 showcases the procedure for extracting an explainable specification capturing the temporal characterization of the MTS classes (Line 6). This is achieved by considering DECLARE clauses, be they dataful or dataless, as features for describing class-segmented traces. By interpreting the numerical values as the violation, satisfaction, or vacuous satisfaction of a specific clause, we then use a white-box classifier such as a Decision Tree to extract a propositional representation of the clauses' satisfaction. Similarly to our previous implementation [9,14], we consider all the most frequent itemsets with at most two activity labels and a support of at least $\theta$ (Line 7). Unary DECLARE clauses such as Init, End, and Exists are mined from the frequently-occurring unary patterns as per Bolt2 (Line 8). We further proceed with the refinement of each of the clauses into its dataful counterpart if and only if the payload-based refinement allows us to substantially differentiate the traces' behaviour according to the constituents' payloads; otherwise, we backtrack to the original dataless variants (Line 21).
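A minimal sketch of the embedding construction feeding the white-box classifier (the clause checkers are toy placeholders, not mined DECLARE clauses; $+1$ encodes satisfaction, $-1$ violation, and $0$ vacuous satisfaction):

```python
# Sketch of the clause-embedding step: each class-segmented trace becomes a row
# of +1 (satisfied), -1 (violated), 0 (vacuously satisfied) entries, one column
# per clause; the rows, paired with classes, would then train a decision tree.
def build_dataframe(traces, clauses):
    """traces: list of (trace, class); clauses: list of functions returning
    'sat', 'viol', or 'vac' for a trace."""
    value = {"sat": 1, "viol": -1, "vac": 0}
    rows = []
    for trace, clazz in traces:
        rows.append(([value[c(trace)] for c in clauses], clazz))
    return rows

# Toy placeholder clauses (not the paper's mined ones).
exists_a = lambda t: "sat" if "a" in t else "viol"
init_b = lambda t: "vac" if not t else ("sat" if t[0] == "b" else "viol")

data = build_dataframe([(["b", "a"], "y1"), (["c"], "y2")], [exists_a, init_b])
print(data)  # [([1, 1], 'y1'), ([-1, -1], 'y2')]
```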
Similar considerations can be carried out for the remaining binary clauses: given all the possible binary frequent patterns occurring across all logs (Line 11), we perform no refinement if such a pair is frequent in only one segmented log (Line 13) and, otherwise, we proceed by refining the clauses when possible (Line 16). Last, we train the white-box classifier over the extracted embedding of each class-segmented trace, from which we derive a propositional and DECLARE-based characterization of each MTS class. Unary Refinement. Algorithm 3 outlines the dataful refinement of the unary clauses (All)Init, (All)End, and (All)Exists, which is attempted if and only if we can mine the same clause from at least two class-segmented logs (Lines 17, 23, and 29). When this occurs, we collect all the payloads associated with the constituents occurring at the beginning (Line 20), at the end (Line 26), or at any position (Line 32), according to the clause to be refined. Lines 9 and 10 achieve the polyadic extension of DECLARE by considering the All$\star$ clause varieties. We also consider all the predicates given within the decision tree representing one class of interest and put them in disjunction, thus considering conditions generally identifying classes (Line 11). No refinement is provided if the decision tree cannot adequately separate the activation payloads according to the associated class so as to generate refined dataful activation predicates (Line 6). As the explanation generated in the decision tree's ex post phase provides a propositional formula of declarative clauses, we discard the refinement of Absence, as this can still appear as the negation of an Exists clause. Binary Refinement. Algorithm 4 describes the refinement of some Polyadic DECLARE binary clauses of choice.
Given that the clauses expressing (Excl)Choice, CoExistence, and RespExistence can be easily formulated as propositional formulas composing Exists, we focus on refining the clauses that cannot be characterized through the process above: Precedence, Response, ChainPrecedence, and ChainResponse. We also discard ChainSuccession (and Succession), as those are composite clauses derivable from the former by conjunction, thus being also derivable from a propositional formula. For the time being, we do not consider All$\star$ template variants, thus considering a clause satisfied if there are no violations across all activations within a trace. After recalling that this procedure is invoked for any pair of activity labels A and B appearing as frequent in at least two segmented logs (Algorithm 2, Line 16), we first collect the evidence for the fulfilment of the aforementioned dataless clauses (narrated in the next paragraph). As this enables the collection of all the activated (Line 21) and targeted

# Algorithm 3 Ad Hoc Explainability: unary clause dataful refinement

1: function REFINEATTEMPT($D$, $\mathcal { D }$, $L$, $n$; $\vartheta = . 7$)
2:   global DataFrame
3:   $D _ { \mathrm { t r a i n } } , D _ { \mathrm { t e s t } } \gets$ SPLIT($D$, $\vartheta$)
4:   $m \gets$ DECISIONTREE($D _ { \mathrm { t r a i n } }$)
5:   $\mathcal { Y } \gets [ \iota \mapsto \bigwedge _ { \pi \in m , \pi = H \Rightarrow \iota } H ] _ { 1 \leq \iota \leq n }$
6:   if ACCURACY($m$, $D _ { \mathrm { t e s t } }$) $\leq 5 0 \%$ then return false
7:   for all $( \iota , \sigma ^ { i } ) \in \mathrm { d o m } ( { \mathcal { D } } )$ do
8:     for all $\pi \in m$ do
9:       DataFrame[$( \iota , i )$, All$L ( \star , \pi )$] := $\mathbb { B } ( | \mathcal { D } ( \iota , \sigma ^ { i } ) | = \sum _ { p \in \mathcal { D } ( \iota , \sigma ^ { i } ) } \mathbf { 1 } _ { \pi ( p ) } )$
10:      DataFrame[$( \iota , i )$, $L ( \star , \pi )$] := $\mathbb { B } ( 0 < \sum _ { p \in \mathcal { D } ( \iota , \sigma ^ { i } ) } \mathbf { 1 } _ { \pi ( p ) } )$
11:  for all $\iota \in \mathrm { d o m } ( \mathcal { Y } )$ do
12:    $\pi \gets \mathcal { Y } ( \iota )$
13:    DataFrame[$( \iota , i )$, All$L ( \star , \pi )$] := $\mathbb { B } ( | \mathcal { D } ( \iota , \sigma ^ { i } ) | = \sum _ { p \in \mathcal { D } ( \iota , \sigma ^ { i } ) } \mathbf { 1 } _ { \pi ( p ) } )$
14:    DataFrame[$( \iota , i )$, $L ( \star , \pi )$] := $\mathbb { B } ( 0 < \sum _ { p \in \mathcal { D } ( \iota , \sigma ^ { i } ) } \mathbf { 1 } _ { \pi ( p ) } )$
15:  return true
16: function UNARYREFINE($\Phi _ { 1 } , \dots , \Phi _ { n }$, $\Phi$; $\mathfrak { S } _ { 1 } , \dots , \mathfrak { S } _ { n }$)
17:  refineInit $\gets \exists \mathsf { a } \in \Sigma .\ \exists i , j \in \mathbb { N } , i \neq j .\ \mathit { I n i t } ( \mathsf { a } , \mathbf { t r u e } ) \in \Phi _ { i } \cap \Phi _ { j }$
18:  if refineInit then ▷ Init refinement
19:    $\mathcal { D } \gets [ ( \iota , \sigma ^ { i } ) \mapsto S ] _ { 1 \leq \iota \leq n , \mathfrak { S } _ { \iota } = \langle \mathbb { G } , \mathcal { L } \rangle , \sigma ^ { i } \in \mathcal { L } , \sigma _ { 1 } ^ { i } = \langle S , \iota \rangle }$
20:    $D \gets \{ ( \kappa ( \varsigma _ { 1 , k } ^ { i } ) , \iota ) \mid \varsigma _ { 1 , k } ^ { i } \in \sigma _ { 1 } ^ { i } , \sigma ^ { i } \in \mathcal { L } , \mathfrak { S } _ { \iota } = \langle \mathbb { G } , \mathcal { L } \rangle , 1 \leq \iota \leq n \}$
21:    refineInit $\gets$ REFINEATTEMPT($D$, $\mathcal { D }$, Init, $n$)
22:  if not refineInit then $\Phi \gets \Phi \cup \{ \mathit { I n i t } ( \mathsf { b } , \mathbf { t r u e } ) \mid \mathit { I n i t } ( \mathsf { b } , \mathbf { t r u e } ) \in \bigcup _ { 1 \leq \iota \leq n } \Phi _ { \iota } \}$
23:  refineEnd $\gets \exists \mathsf { a } \in \Sigma .\ \exists i , j \in \mathbb { N } , i \neq j .\ \mathit { E n d } ( \mathsf { a } , \mathbf { t r u e } ) \in \Phi _ { i } \cap \Phi _ { j }$
24:  if refineEnd then ▷ End refinement
25:    $\mathcal { D } \gets [ ( \iota , \sigma ^ { i } ) \mapsto S ] _ { 1 \leq \iota \leq n , \mathfrak { S } _ { \iota } = \langle \mathbb { G } , \mathcal { L } \rangle , \sigma ^ { i } \in \mathcal { L } , \sigma _ { | \sigma ^ { i } | } ^ { i } = \langle S , \iota \rangle }$
26:    $D \gets \{ ( \kappa ( \varsigma _ { | \sigma ^ { i } | , k } ^ { i } ) , \iota ) \mid \varsigma _ { | \sigma ^ { i } | , k } ^ { i } \in \sigma _ { | \sigma ^ { i } | } ^ { i } , \sigma ^ { i } \in \mathcal { L } , \mathfrak { S } _ { \iota } = \langle \mathbb { G } , \mathcal { L } \rangle , 1 \leq \iota \leq n \}$
27:    refineEnd $\gets$ REFINEATTEMPT($D$, $\mathcal { D }$, End, $n$)
28:  if not refineEnd then $\Phi \gets \Phi \cup \{ \mathit { E n d } ( \mathsf { b } , \mathbf { t r u e } ) \mid \mathit { E n d } ( \mathsf { b } , \mathbf { t r u e } ) \in \bigcup _ { 1 \leq \iota \leq n } \Phi _ { \iota } \}$
29:  refineEx $\gets \exists \mathsf { a } \in \Sigma .\ \exists i , j \in \mathbb { N } , i \neq j .\ \mathit { E x i s t s } ( \mathsf { a } , \mathbf { t r u e } ) \in \Phi _ { i } \cap \Phi _ { j }$
30:  if refineEx then ▷ Exists refinement
31:    $\mathcal { D } \gets [ ( \iota , \sigma ^ { i } ) \mapsto \{ \varpi ( \varsigma _ { j , - } ^ { i } ) \mid \varsigma _ { j , - } ^ { i } \in \sigma _ { j } ^ { i } , \sigma ^ { i } \in \mathcal { L } \} ] _ { 1 \leq \iota \leq n , \mathfrak { S } _ { \iota } = \langle \mathbb { G } , \mathcal { L } \rangle }$
32:    $D \gets \{ ( \kappa ( \varsigma _ { j , k } ^ { i } ) , \iota ) \mid \varsigma _ { j , k } ^ { i } \in \sigma _ { j } ^ { i } , \sigma ^ { i } \in \mathcal { L } , \mathfrak { S } _ { \iota } = \langle \mathbb { G } , \mathcal { L } \rangle , 1 \leq \iota \leq n \}$
33:    refineEx $\gets$ REFINEATTEMPT($D$, $\mathcal { D }$, Exists, $n$)
34:  if not refineEx then $\Phi \gets \Phi \cup \{ \mathit { E x i s t s } ( \mathsf { b } , \mathbf { t r u e } ) \mid \mathit { E x i s t s } ( \mathsf { b } , \mathbf { t r u e } ) \in \bigcup _ { 1 \leq \iota \leq n } \Phi _ { \iota } \}$
35:  return $\Phi$

# Algorithm 4 Ad Hoc Explainability: binary clause dataful refinement

1: procedure FILLINDATAFRAME($S _ { i } ^ { \iota }$, $\ell$)
2:   global DataFrame
3:   if $S _ { i } ^ { \iota } = \varnothing$ or $S _ { i } ^ { \iota } = \mathsf { V a c }$ then
4:     DataFrame[$( \iota , i )$, $\ell$] := 0
5:   else if $\mathsf { V i o l } \in S _ { i } ^ { \iota }$ then
6:     DataFrame[$( \iota , i )$, $\ell$] := $- 1$
7:   else
8:     DataFrame[$( \iota , i )$, $\ell$] := 1
9: function BINARYREFINE($A$, $B$, $\mathfrak { S } _ { 1 } , \dots , \mathfrak { S } _ { n }$; poly = true)
10:  templates $\gets$ [ChainResponse, ChainPrecedence, Precedence, Response]
11:  shorthands $\gets$ [cr, cp, p, r]
12:  dictionary $\gets$ ZIP(templates, shorthands)
13:  pairs $\gets \{ \langle \mathsf { A } , \mathsf { B } \rangle , \langle \mathsf { B } , \mathsf { A } \rangle \}$
14:  for all $1 \leq \iota \leq n$ do
15:    $\mathfrak { S } _ { \iota } = \langle \mathbb { G } , \mathcal { L } _ { \iota } \rangle$
16:    poly $\gets$ if poly and $\exists G \in \mathbb { G } .\ \mathsf { A } , \mathsf { B } \in N _ { G }$ then true else false
17:    CHAINS($\mathsf { A } ^ { \prime }$, $\mathsf { B } ^ { \prime }$, $\theta$, poly, heur; $\mathfrak { S } _ { y }$) ▷ Algorithm 5
18:    RESPPREC($\mathsf { A } ^ { \prime }$, $\mathsf { B } ^ { \prime }$, $\theta$, poly, heur; $\mathfrak { S } _ { y }$) ▷ Algorithm 5
19:  for all $\mathsf { c } ( \mathsf { A } ^ { \prime } , \mathsf { B } ^ { \prime } )$ s.t. $\langle \mathsf { A } ^ { \prime } , \mathsf { B } ^ { \prime } \rangle \in$ pairs and $\mathsf { c } \in$ templates do
20:    short $\gets$ dictionary[$\mathsf { c }$]
21:    $D _ { \mathrm { a c t } } \gets \{ ( \kappa ( \varsigma _ { j , k } ^ { i } ) , \iota ) \mid \varsigma _ { j , k } ^ { i } , \langle \varsigma _ { j , k } ^ { i } , \dots$
, \rangle \in \mathrm { a c t } ( \boldsymbol { \iota } ) _ { \mathsf { A } ^ { \prime } , \mathsf { B } ^ { \prime } } ^ { \mathrm { s h o r t } } , 1 \leq \iota \leq n \Big \}$ 22: if $D _ { \mathrm { a c t } } = \emptyset$ then yield $\mathsf { c } ( \mathsf { A } ^ { \prime } , \mathsf { B } ^ { \prime } )$ 23: else 24: $m \mathop { } \mathrm { D e c t s I o N T R E E } ( D _ { \mathrm { a c t } } )$ 25: $\mathbf { i f } \ \mathrm { P U R I T Y } ( m ) > 5 0 \%$ then 26: for all $\pi \in { \mathfrak { m } }$ do ▷ Refine by activations 27: for all $1 \leq \iota \leq n$ , $\mathfrak { S } _ { \iota } = \langle \_ , \mathcal { L } \rangle$ , $\varsigma _ { j , k } ^ { i } , \langle \varsigma _ { j , k } ^ { i } , . . \rangle \in \mathrm { a c t } ( \iota ) _ { \mathsf { A } ^ { \prime } , \mathsf { B } ^ { \prime } } ^ { \mathrm { s h o r t } }$ do 28: test $ \pi ( \kappa ( \varsigma _ { j , k } ^ { i } ) )$ 29: if $\mathsf { \varsigma } _ { j , k } ^ { i } \in \mathrm { v i o l } ( \iota ) _ { \mathsf { A } ^ { \prime } , \mathsf { B } ^ { \prime } } ^ { \mathrm { s h o r t } }$ then 30: $S _ { i } ^ { \iota } \gets S _ { i } ^ { \iota } \cup \{ \mathrm { t e s t } \Im \mathsf { v i o l } : \mathsf { V a c } \}$ 31: else 32: ${ S _ { i } ^ { \iota } } \gets S _ { i } ^ { \iota } \cup \{ \mathrm { t e s t } ? 
\mathsf { S a t } : \mathsf { V a c } \}$ 33: for all $\sigma ^ { i } \in \mathfrak { S } _ { \mathfrak { c } }$ do FILLINDATAFRAME(Siι, c(A′, π, B′, true)) 34: else 35: $\begin{array} { r l } & { \mathbf { \Theta } _ { D _ { \mathrm { t g t } } } ^ { \mathrm { a s s } } \{ ( \kappa ( \varsigma _ { h , k ^ { \prime } } ^ { i } ) , \iota ) \Big | \varsigma _ { j , k } ^ { i } , \varsigma _ { h , k ^ { \prime } } ^ { i } \in \mathrm { a c t } ( \iota ) \mathring \mathbf { \Xi } _ { \mathsf { A } ^ { \prime } , \mathsf { B } ^ { \prime } } ^ { \mathrm { b o r t } } , 1 \le \iota \le n \} } \\ & { m \gets \mathrm { D E C I S I O N T R E E } ( D _ { \mathrm { t g t } } ) } \end{array}$ 36: 37: if PURITY $( m ) > 5 0 \%$ then 389: fofroralall $\pi \in { \mathfrak { m } }$ ≤don, $\mathfrak { S } _ { \iota } = \langle \_ , \mathcal { L } \rangle$ , $\varsigma _ { j , k } ^ { i } , \langle \varsigma _ { j , k } ^ { i } , . . \rangle \in \mathrm { a c t } ( \iota ) _ { \mathsf { A } ^ { \prime } , \mathsf { B } ^ { \prime } } ^ { \mathrm { s h o r t } }$ dRoefine by targets 40: test $ \pi ( \kappa ( \varsigma _ { j , k } ^ { i } ) )$ 41: if $\varsigma _ { j , k } ^ { i } \in \mathrm { v i o l } ( \iota ) _ { \mathsf { A } ^ { \prime } , \mathsf { B } ^ { \prime } } ^ { \mathrm { s h o r t } }$ then 42: $S _ { i } ^ { \iota } \gets S _ { i } ^ { \iota } \cup \left\{ \begin{array} { r l r l } \end{array} \right.$ test ? Viol : Vac 43: else 44: $S _ { i } ^ { \iota } \gets S _ { i } ^ { \iota } \cup \left\{ \begin{array} { r l r l } \end{array} \right.$ test ? 
Sat : Vac 45: for all $\sigma ^ { i } \in \mathfrak { S } _ { \mathfrak { c } }$ do FILLINDATAFRAME(Siι, c(A′, true, B′, π)) 46: else yield $\mathsf { c } ( \mathsf { A } ^ { \prime } , \mathsf { B } ^ { \prime } )$ ▷ Backtracking to the dataless clause (Line 35) constituents’ payloads, we can then see if is possible to extract a propositional characterization of the data through a decision tree; we consider the data summarization process through propositionalization successful if we tree achieves a suitable amount of purity (Lines 25 and 37). After extracting each path in such a tree as a binary predicate $\pi$ , we consider all the activations first within a single trace $\sigma ^ { i }$ of a segmented log $\mathfrak { S } _ { \iota }$ : differently from the refinement phases performed in DML [12] and in event-based mining algorithms [23], we change the clause satisfaction according to the joint fulfilment of the predicate $\pi$ to generate a correct representation of the embedding: activated conditions that are no more satisfying $\pi$ will always lead to a non-activation of the clause, and otherwise we retain the violation/satisfaction condition (Lines 28-32). We then consider the trace as violating the refined clause if at least one constituent leads to its violation (Line 28); we consider the clause vacuously satisfied if was never activated before or if the addition of the new $\pi$ activation condition leads to de-activation of the clause testing (Line 4), and we consider as the trace being globally satisfied by the clause otherwise (Line 8). We draw similar considerations for the refinement of the target conditions. If neither of these two attempts at refining is satisfactory, we backtrack to the dataless clause (Line 45). 
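The propositionalisation step at the heart of this refinement (training a decision tree over the constituents' payloads, checking its purity, and reading each root-to-leaf outcome as a binary predicate $\pi$) can be sketched as follows. This is a minimal scikit-learn sketch: the toy payloads, the purity measure, and all names are illustrative stand-ins, not the paper's actual implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def leaf_purity(tree, X, y):
    """Fraction of samples that belong to the majority class of their
    leaf -- a crude stand-in for the PURITY check of Lines 25 and 37."""
    leaves = tree.apply(X)
    pure = 0
    for leaf in np.unique(leaves):
        mask = leaves == leaf
        pure += np.bincount(y[mask]).max()
    return pure / len(y)

# Payloads of the activated constituents (kappa(...)), labelled by
# outcome -- toy, perfectly separable data for illustration only.
X = np.array([[0.1], [0.2], [0.9], [1.1], [0.15], [1.0]])
y = np.array([0, 0, 1, 1, 0, 1])

m = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
if leaf_purity(m, X, y) > 0.5:           # PURITY(m) > 50%
    # The fitted tree acts as a binary predicate pi over a payload,
    # mirroring the test of Lines 28 and 40.
    pi = lambda payload: m.predict(payload.reshape(1, -1))[0] == 1
    print(pi(np.array([1.05])))
```

A fuller rendition would enumerate each root-to-leaf path via the tree's internal structure and emit one predicate per path; a predict-based predicate is enough to convey the mechanism.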
Algorithm 5 extends the mechanism introduced in Bolt2 [9] and used in our previous paper for efficiently mining the aforementioned clauses, shifting from collecting the trace IDs activating (sat), violating (viol), or not activating (vac) the clauses to retaining precisely which constituents generate these conditions. This is done to reconstruct the payloads associated with the activated and targeted conditions, and to identify which constituent led to the violation of the clause. We denote all the possible target conditions for each activation leading to clause satisfaction as pairs of constituents associated with the activation. Bolt2 (e.g., Line 4 and Line 29) did not consider different overlapping constituents referring to the same MTS variable, so as to avoid redundant and obvious correlations. To recognize those, we now use activity label taxonomies: for constituents expressing distinct DTs associated with the same MTS variable, we are interested in establishing temporal correlations between activated and targeted constituents if and only if the targeted (or activated) constituent terminates before the occurrence of the forthcoming activated (or targeted) one, while always referring to the same variable. Bolt2 only considered pointwise, non-polyadic events with no associated durative information, thus proving inadequate to support these new features. # 5. Empirical Results We exploit some time series datasets made available through the sktime library [24], together with the Dyskinetic event dataset from our previous contribution [14]. Italy Power Demand [22] provides a dataset of univariate, equal-length time series, where the classification label differentiates energy consumption patterns of the October-March period from those of the April-September one. The Basic Motions dataset5 considers multivariate time series with 6 dimensions, distinguishing different motion types: walking, resting, running, and badminton.
This dataset collects the $x$, $y$, and $z$ axis information from a wrist-watch accelerometer and a gyroscope; this dataset has been used for comparison with the motion dataset discussed in our previous work, while remarking the inherently more difficult nature of solving this other clinical problem.

# Algorithm 5 Polyadic Mining for (Chain)Response and (Chain)Precedence.

1: procedure CHAINS($\mathsf{A}$, $\mathsf{B}$, $\theta$, poly; $\mathcal{L}_y$)
2: for all $\sigma^i \in \mathcal{L}_y$ do
3: for all $\varsigma^i_{j,k}$ s.t. $\lambda(\varsigma^i_{j,k}) = \mathsf{A}$ do
4: span $\gets$ if poly then $\pi(\varsigma^i_{j,k})$ else 1
5: if $\nexists k.\, \lambda(\varsigma^i_{j+\mathrm{span},k}) = \mathsf{B}$ then $\{\mathrm{act}(y)^{\mathsf{cr}}_{\mathsf{A},\mathsf{B}}.\mathrm{add}(\varsigma^i_{j,k})$; $\mathrm{viol}(y)^{\mathsf{cr}}_{\mathsf{A},\mathsf{B}}.\mathrm{add}(\varsigma^i_{j,k})\}$
6: else $\mathrm{act}(y)^{\mathsf{cr}}_{\mathsf{A},\mathsf{B}}.\mathrm{add}(\langle \varsigma^i_{j,k}, \varsigma^i_{j+\mathrm{span},k} \rangle)$ for all $k$ s.t. $\lambda(\varsigma^i_{j+\mathrm{span},k}) = \mathsf{B}$
7: if $j > 1$ then
8: $\mathrm{act}(y)^{\mathsf{cp}}_{\mathsf{A},\mathsf{B}}.\mathrm{add}(\varsigma^i_{j,k})$;
9: if $\nexists h, k'.\, \lambda(\varsigma^i_{h,k'}) = \mathsf{B}$ and $h + (\mathrm{poly} \mathbin{?} \delta(\varsigma^i_{h,k'}) : 1) \leq j$ then
10: $\mathrm{viol}(y)^{\mathsf{cp}}_{\mathsf{A},\mathsf{B}}.\mathrm{add}(\varsigma^i_{j,k})$
11: else $\mathrm{act}(y)^{\mathsf{cp}}_{\mathsf{A},\mathsf{B}}.\mathrm{add}(\langle \varsigma^i_{j,k}, \varsigma^i_{h,k'} \rangle)$ for all $h, k'$ s.t. $\lambda(\varsigma^i_{h,k'}) = \mathsf{B}$ and $h + (\mathrm{poly} \mathbin{?} \delta(\varsigma^i_{h,k'}) : 1) \leq j$
12: procedure RESPPREC($\mathsf{A}$, $\mathsf{B}$, $\theta$, poly, heur; $\mathcal{L}_y$)
13: if $\nexists \varsigma^i_{h,k}.\, \lambda(\varsigma^i_{h,k}) = \mathsf{A}$ then
14: return ▷ Not reporting vacuous satisfaction explicitly
15: else if $\nexists \varsigma^i_{h,k}.\, \lambda(\varsigma^i_{h,k}) = \mathsf{B}$ then
16: for all $\varsigma^i_{j,k}$ s.t. $\lambda(\varsigma^i_{j,k}) = \mathsf{A}$ do
17: $\mathrm{act}(y)^{\mathsf{r}}_{\mathsf{A},\mathsf{B}}.\mathrm{add}(\varsigma^i_{j,k})$; $\mathrm{viol}(y)^{\mathsf{r}}_{\mathsf{A},\mathsf{B}}.\mathrm{add}(\varsigma^i_{j,k})$; $\mathrm{act}(y)^{\mathsf{p}}_{\mathsf{A},\mathsf{B}}.\mathrm{add}(\varsigma^i_{j,k})$;
18: else
19: for all $\sigma^i \in \mathcal{L}_y$ s.t. $\exists \varsigma^i_{j,k}.\, \lambda(\varsigma^i_{j,k}) = \mathsf{A} \vee \lambda(\varsigma^i_{j,k}) = \mathsf{B}$ do
20: if $\nexists h, k'.\, \lambda(\varsigma^i_{h,k'}) = \mathsf{B}$ then ▷ Only As
21: $\mathrm{act}(y)^{\mathsf{r}}_{\mathsf{A},\mathsf{B}}.\mathrm{add}(\varsigma^i_{j,k})$; $\mathrm{viol}(y)^{\mathsf{r}}_{\mathsf{A},\mathsf{B}}.\mathrm{add}(\varsigma^i_{j,k})$;
22: $\mathrm{act}(y)^{\mathsf{p}}_{\mathsf{A},\mathsf{B}}.\mathrm{add}(\varsigma^i_{j,k})$; $\mathrm{viol}(y)^{\mathsf{p}}_{\mathsf{B},\mathsf{A}}.\mathrm{add}(\varsigma^i_{j,k})$;
23: else if $\nexists h, k'.\, \lambda(\varsigma^i_{h,k'}) = \mathsf{A}$ then ▷ Only Bs
24: $\mathrm{act}(y)^{\mathsf{p}}_{\mathsf{B},\mathsf{A}}.\mathrm{add}(\varsigma^i_{j,k})$; $\mathrm{viol}(y)^{\mathsf{p}}_{\mathsf{A},\mathsf{B}}.\mathrm{add}(\langle \mathrm{NULL}, \varsigma^i_{j,k} \rangle)$
25: else if $\lambda(\varsigma^i_{j,k}) = \mathsf{A}$ then ▷ Both occur
26: $\mathrm{act}(y)^{\mathsf{p}}_{\mathsf{A},\mathsf{B}}.\mathrm{add}(\varsigma^i_{j,k})$;
27: if $\exists \varsigma^i_{j+h,k}, \varsigma^i_{j,k'}.\, h \geq 0$ and $\lambda(\varsigma^i_{j+h,k}) = \mathsf{B}$ and $\lambda(\varsigma^i_{j,k'}) = \mathsf{A}$ then
28: $\mathrm{viol}(y)^{\mathsf{p}}_{\mathsf{B},\mathsf{A}}.\mathrm{add}(\varsigma^i_{j,k})$
29: if $\nexists h, k'.\, \lambda(\varsigma^i_{h,k'}) = \mathsf{B}$ and $j + (\mathrm{poly} \mathbin{?} \pi(\varsigma^i_{j,k}) : 1) \leq h$ then
30: $\mathrm{viol}(y)^{\mathsf{r}}_{\mathsf{A},\mathsf{B}}.\mathrm{add}(\varsigma^i_{j,k})$
31: else $\mathrm{act}(y)^{\mathsf{r}}_{\mathsf{A},\mathsf{B}}.\mathrm{add}(\langle \varsigma^i_{j,k}, \varsigma^i_{h,k'} \rangle)$ for all $h, k'$ s.t. $\lambda(\varsigma^i_{h,k'}) = \mathsf{B}$ and $j + (\mathrm{poly} \mathbin{?} \pi(\varsigma^i_{j,k}) : 1) \leq h$

The Dyskinetic dataset [14] attempts to categorize Dyskinetic/Off events in terms of different drug assumption patterns as well as motor sensor information. Differently from the previous set of experiments [14], we now consider, for both our algorithms and the competing MTSC approaches, the full dataset, also including the active principle assumption intakes estimated through a rough approximation from the current literature [6]. We also use the OSULeaf [18] dataset to remark how a simple image classification problem can also be represented as a time series classification problem: the univariate series were obtained by color image segmentation and boundary extraction (in the anti-clockwise direction) from digitized leaf images of six classes: Acer Circinatum, Acer Glabrum, Acer Macrophyllum, Acer Negundo, Quercus Garryana, and Quercus Kelloggii.
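The CHAINS bookkeeping of Algorithm 5 can be sketched in Python under strong simplifying assumptions: a single trace of pointwise, non-polyadic events (span fixed to 1), no payloads, and positions standing in for constituents; all names are ours, not the paper's.

```python
def chains(trace):
    """ChainResponse(A,B): every A must be immediately followed by a B.
    ChainPrecedence(A,B): every non-initial A must be immediately
    preceded by a B.  Returns activation and violation positions,
    mirroring the act/viol sets of Algorithm 5 (simplified: span = 1)."""
    act_cr, viol_cr, act_cp, viol_cp = [], [], [], []
    for j, label in enumerate(trace):
        if label != "A":
            continue
        act_cr.append(j)                     # every A activates ChainResponse
        if j + 1 >= len(trace) or trace[j + 1] != "B":
            viol_cr.append(j)                # no B right after -> violation
        if j > 0:
            act_cp.append(j)                 # non-initial A activates ChainPrecedence
            if trace[j - 1] != "B":
                viol_cp.append(j)            # no B right before -> violation
    return act_cr, viol_cr, act_cp, viol_cp

print(chains(["A", "B", "A", "C"]))  # ([0, 2], [2], [2], [])
```

The second A (position 2) activates both templates, violates ChainResponse (followed by C) and satisfies ChainPrecedence (preceded by B); retaining positions rather than a mere trace ID is what later allows the payload lookup described above.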
We consider these datasets for the following reasons: although EMeriTAte was explicitly designed to capture trend correlations across different dimensions, we also use univariate datasets such as Italy Power Demand and OSULeaf to show that the approach can also be applied to simpler datasets. Furthermore, we consider another multivariate dataset, likewise based on different motor sensors, to showcase the peculiar difference between the problem posed by the identification of Dyskinetic events and that of simply detecting different movement patterns. We provide some general statistics for these datasets in Table 2.

Table 2: Dataset statistics

Our benchmarks were conducted on a Dell Precision 5760 mobile workstation running Ubuntu 22.04. The machine features an Intel Xeon W-11955M CPU @ 2.60GHz $\times$ 16, 64GB DDR4 3200MHz RAM, and 500GB of free disk space.

# 5.1. Run-time efficiency of EMeriTAte vs EMeriTAte+DF

Fig. 2 reports the running times of both our proposed MTSC algorithms, from our previous contribution and from the current paper. The first three phases refer to the A Priori explainability segment, while the last two refer to the Ad Hoc one. In both scenarios, we set $\theta = 0$ to maximise the DECLARE recall, and use in both circumstances a sensitivity parameter of $\varepsilon = 10^{-4}$. We measured the time required to fully process each dataset. Although the loading times of the datasets are comparable, we observe that the loading phase in this novel solution, differently from the former, also includes a time series indexing phase. This then leads to a significant reduction in running time, also ascribable to discarding some redundant DT activity labels.
This is also witnessed in the serialization phase, whose running time has slightly decreased despite the addition of Catch22 payloads. Running times for the mining phase in EMeriTAte+DF show that this algorithm is heavily dominated by the number of traces and segments available: in these situations, the former version of the algorithm was heavily hampered by conducting the mining and the embedding phases at two different times, thus requiring the data to be loaded and indexed several times. The newly proposed approach might have a more significant overhead in some situations: generating a dataful representation of the clauses requires accessing the constituents' payloads several times, thus increasing the number of times we access KnoBAB to retrieve log information. Notwithstanding this, the new approach leads to an overall decrease in running time (Fig. 3).

# 5.2. Comparing competing MTSC approaches

We used the Python sktime library to ensure a uniform implementation for all competing approaches discussed in Section 2.3, for which we used the default hyperparameter setup from sktime. As the competing classifiers support neither the classification of time series of different lengths nor time series whose classification outcome varies over time, for the sole Dyskinetic event dataset we consider the maximal projections belonging to the same class and, within each of these, we take multiple sliding windows of the same size. This overall increases the number of time series to be classified by the competing approaches. Run-time efficiency. Fig.
3 compares the running times of all the MTSC approaches trained over all the aforementioned datasets without splitting the data into training and testing sets, thus using the same setting as the previous experiments. The competing approaches are not as significantly impacted by the number of MTS dimensions as ours: observe the running times for the Dyskinetic events dataset where, despite the number of traces being increased so as to obtain multiple MTS of the same length, the competing approaches still exhibit lower running times than ours. This is motivated by our solutions testing multiple possible correlations across dimensions and activity labels, while other solutions, such as Rocket and TapNet, attempt to capture cross-dimensional correlations through convolutions, thus flattening the values via dimensionality reduction. Similar considerations apply to the other classifiers, which have not been specifically designed to consider correlations across dimensions: the Euclidean distance for MTS only considers dimension-wise similarity, CIF considers variations within one sliding window of fixed time and length, and only within one single dimension, and STC considers shapelets occurring within one single dimension. On the other hand, TapNet is heavily dominated by the number of traces considered at training time and, across all the competitors, is the one consistently requiring a considerable amount of training time. The EMeriTAte solution overtakes EMeriTAte+DF on the Dyskinetic dataset, while the other datasets consistently show similar running times. Classification outcomes. Fig. 4 provides the accuracy, precision, recall, and F1 score results across all the aforementioned datasets and competing approaches when testing the solutions with a $70\%-30\%$ split between training and testing datasets using stratified k-fold sampling.
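The sliding-window preparation described above for the Dyskinetic dataset (cutting each maximal same-class projection into equal-length windows so that length-constrained classifiers can be applied) can be sketched as follows; the window width and step are illustrative, as the text does not fix them:

```python
import numpy as np

def sliding_windows(mts, width, step=1):
    """Cut a (time x dimensions) multivariate series into fixed-width
    windows, yielding equal-length series from one projection."""
    t = mts.shape[0]
    return np.stack([mts[s:s + width] for s in range(0, t - width + 1, step)])

# A maximal same-class projection: 6 time points, 2 dimensions.
proj = np.arange(12).reshape(6, 2)
wins = sliding_windows(proj, width=4, step=1)
print(wins.shape)  # (3, 4, 2): three equal-length series from one projection
```

Each window inherits the class of its projection, which is how the number of time series handed to the competing classifiers grows.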
For both EMeriTAte and EMeriTAte+DF, we train the Decision Trees over the trace embeddings with a maximum depth of 5, so as to minimise the chance of generating models that overfit the data. We run 10 experiments, for which we report the average score and the distance between the greatest and the lowest value obtained $(\pm)$. When considering datasets with more than two classes, we use macro measures. Concerning the Dyskinetic dataset, we kept training/testing splits different from the ones in our previous paper. We observe that, for three datasets out of four, EMeriTAte+DF outperforms our previous implementation. The largest score distance between these two solutions can be found on the OSULeaf dataset: this can be explained by the fact that the main difference between the two solutions resides in the usage of the data refinement, while the discarding of DT patterns seems not to have interfered with the classification outcome. For the Italy Power Demand dataset, we see that our first solution outperforms the current implementation: given the above, this can be ascribed to the presence of several fluctuation events which were not discarded in our previous solution. Comparing these solutions to the competing approaches, no competitor consistently outperforms the others across all the datasets and, when EMeriTAte+DF did not achieve maximum scores, it still achieved $\approx 99\%$ accuracy, macro precision, macro recall, and F1 score. E-KNN scores low values for datasets such as Italy Power Demand and OSULeaf, thus remarking the impracticality of merely considering time series similarity as a valid pathway to classification for real-world datasets.

Fig. 4: Training results over the datasets of interest. Macro metrics are used for datasets containing more than 2 classes. Numbers in blue (red) remark the best (worst) results.
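The evaluation protocol just described (a stratified 70%-30% train/test split, depth-5 decision trees over the trace embeddings, and macro scores for multi-class datasets) can be sketched with scikit-learn; the embeddings and labels below are synthetic stand-ins, and everything beyond the stated depth, split, and metric is illustrative:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))        # stand-in trace embeddings
y = rng.integers(0, 3, size=120)     # three classes -> macro metrics

# Stratified 70/30 split, as in the evaluation protocol.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Depth-5 tree over the embeddings, limiting overfitting.
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print(round(f1_score(y_te, clf.predict(X_te), average="macro"), 3))
```

Repeating this over 10 shuffled splits and reporting the mean plus the max-min spread reproduces the $(\pm)$ figures reported in the tables.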
Comparing the results over the Dyskinetic dataset, it is clear that the best competing approaches were the shapelet-based (STC) and attention-based (TapNet) ones, while still scoring metrics well below $50\%$. This can be motivated as follows: either the process of splitting the time series lost some important correlation information or truncated some patterns, or these solutions still cannot extract the best behavioural predictors for the dataset of interest. After analysing the models extracted from EMeriTAte+DF, we can clearly see that the classification always refers to correlations across different dimensions. Given the explanations being extracted, this then remarks that the dataset of interest is heavily characterized by correlations across dimensions that could not be adequately captured by the pre-existing solutions.
This paper offers a hybrid explainable temporal data processing pipeline, DataFul Explainable MultivariatE coRrelatIonal Temporal Artificial inTElligence (EMeriTAte+DF), bridging numerically-driven temporal data classification with an event-based one through verified artificial intelligence principles, enabling human-explainable results. This was made possible through a preliminary a posteriori explainability phase describing the numerical input data in terms of concurrent constituents with numerical payloads. This further required extending the event-based literature to design specification mining algorithms supporting concurrent constituents. Our previous and current solutions outperform state-of-the-art solutions for multivariate time series classification, thus showcasing the effectiveness of the proposed methodology.
# Curation and Analysis of MIMICEL – An Event Log for MIMIC-IV Emergency Department

Jia Wei1,∗, Chun Ouyang1, Bemali Wickramanayake1, Zhipeng He1, Keshara Perera2, and Catarina Moreira3

1School of Information Systems, Queensland University of Technology, Brisbane, 4000, Australia
2Queensland Health, Brisbane, 4000, Australia
3Data Science Institute, University of Technology Sydney, Sydney, 2007, Australia
∗corresponding author: Jia Wei (jia.wei@hdr.qut.edu.au)

# ABSTRACT

The global issue of overcrowding in emergency departments (ED) necessitates the analysis of patient flow through the ED to enhance efficiency and alleviate overcrowding. However, traditional analytical methods are time-consuming and costly. The healthcare industry is embracing process mining tools to analyse healthcare processes and patient flows. Process mining aims to discover, monitor, and enhance processes by obtaining knowledge from event log data. However, the availability of event logs is a prerequisite for applying process mining techniques. Hence, this paper aims to generate an event log for analysing processes in the ED. In this study, we extract an event log from the MIMIC-IV-ED dataset and name it MIMICEL. MIMICEL captures the process of the patient journey in the ED, allowing for the analysis of patient flows and improving ED efficiency. We present analyses conducted using MIMICEL to demonstrate the utility of the dataset. The curation of MIMICEL facilitates the extensive use of MIMIC-IV-ED data for ED analysis using process mining techniques, while also providing the process mining research communities with a valuable dataset for study.

# Background & Summary

Emergency departments (ED) are one of the most important hospital departments1. Due to the variety of diseases and the urgency of the patients, ED processes are complex and involve various activities requiring multidisciplinary human and medical resources2.
This has also contributed to ED overcrowding, a widespread problem in the global healthcare sector. The issue is largely caused by the inability of emergency services to meet rising demand3, which can further compromise the quality and accessibility of healthcare services2. Therefore, it is necessary to analyse the flow of patients through the ED to increase the effectiveness of the processes and reduce overcrowding4. In general, healthcare processes, such as the process of patient activities during their stay in an emergency department (referred to as the ED process), are regarded as dynamic, complex, multidisciplinary and ad hoc5. In the absence of a comprehensive view of the end-to-end process, traditional analysis methods, such as interviews and group meetings to obtain insight into the process, are time-consuming, costly, and subjective6 when applied to ED process analysis. An increasing number of research works have investigated process mining for the healthcare domain6, with the aim of analysing healthcare process performance using process execution data recorded in health information systems. Munoz-Gama et al.7 outline the distinct characteristics of healthcare processes and the associated challenges that need to be addressed when utilising process mining for healthcare process analysis. In addition, Martin et al.8 provide recommendations for process mining researchers and for stakeholders in the healthcare domain, respectively, on applying process mining to healthcare to enhance the usability and comprehension of these applications. In particular, process mining techniques are increasingly adopted to analyse patient flows in the ED. Delias et al.6 demonstrate the use of process discovery techniques to identify and analyse ED processes. The discovered process model can visualise the patient's path through the emergency department.
As a result of the visualisation of ED processes, process knowledge, such as activity frequency and process patterns, can be captured and used for process performance analysis (e.g., to identify bottlenecks that affect process efficiency). Similarly, Cho et al.9 introduce a process performance indicator framework for managing emergency room processes, assessing performance based on four dimensions: time, cost, quality, and flexibility. Event log data is the foundation of process mining and is usually represented in the form of sequential tabular data. An event log is a collection of cases, and each case consists of a sequence of events (ordered according to when they occurred)10. The availability of the event log is a prerequisite for applying process mining techniques. This work aims to extract event logs from the MIMIC-IV-ED11 dataset, an extensive, freely available database containing unidentifiable health-related data from Beth Israel Deaconess Medical Center. The MIMIC-IV-ED dataset contains data tables that capture individual patient activities during the ED process and are linked using an existing relational database schema. Although these data tables provide a snapshot of the patient journey, they do not depict the patient’s end-to-end process in the ED. To comprehend and analyse the ED process, we follow a well-established guideline12 for generating event logs and name this log MIMICEL. The extracted MIMICEL is intended to capture an end-to-end patient journey, facilitating the analysis of existing patient flows to enhance the efficiency of the ED process. Furthermore, the curation of MIMICEL makes the MIMIC-IV-ED data accessible in the form of an event log, which enables the use of process mining techniques to analyse ED processes. It also provides the process mining research communities with a valuable dataset for study. # Methods In existing research, various methods have been proposed to generate event logs. 
Remy et al. [13] introduce a method that uses structured data from data warehouses and clinical guidelines to identify process-related data. This approach requires significant manual effort, including domain expert consultations, and relies heavily on domain-specific knowledge. Rojas et al. [14] propose a method to extract data from hospital information systems to generate event logs for analysing ED processes. However, their approach relies on predefined expert queries to determine the data to be included in the event logs. Andrews et al. [15] introduce a semi-automatic, domain-independent method for event log generation, focusing on integrating data quality assessment metrics into log generation. However, both of the latter methods lack a systematic procedure for creating event logs, which limits their reproducibility. In this study, we adhere to the guideline proposed by Jans et al. [12], which is considered the most comprehensive and systematic approach for the extraction of event logs. It consists of nine steps, which we follow to extract an event log specifically tailored to the objective of analysing ED processes. Below, we discuss the method used to extract an event log that captures the execution of ED processes from the MIMIC-IV-ED dataset [11]. Figure 1 depicts an overview of the method.

# • Step 1: Identify the goal of ED process analysis

This step focuses on defining the objective of event log generation, which needs to be aligned with the requirements of the project sponsor [12]. ED overcrowding remains a critical issue in healthcare, driven by the growing demand for emergency care and limited service availability [3]. Its consequences include prolonged patient waiting times, patients leaving without being treated, poor quality of care and high stress placed on emergency department staff [2]. Addressing these challenges by improving ED patient flow is a priority for healthcare providers [16].
To improve ED process efficiency, it is essential to first understand a patient’s journey within the ED. This has motivated the main objective of this work, which is to capture an end-to-end patient journey using data from the MIMIC-IV-ED dataset.

# • Step 2: Identify core activities in the ED process

This step involves defining the process boundaries and identifying core activities relevant to stakeholders [12], guided by domain knowledge. In this work, the widely adopted conceptual model of emergency department crowding [17] is referenced as the basis for identifying key activities in the ED process. This model identifies three interdependent components (input, throughput, and output) as contributors to ED crowding, emphasising the necessity of examining the entire acute care system to address the issue effectively. However, as this work focuses on activities within the ED, only the throughput component is considered.

Figure 1. An overview of the method for generating MIMICEL

The throughput component describes the internal process of the ED, which comprises two primary phases. In the first phase, patients arriving at the ED are triaged and placed in rooms. In the second phase, patients receive diagnostic tests and treatment. Some patients may leave the ED before their treatment is complete. Based on consultations with an ED doctor, it was confirmed that the ED operates independently of other hospital services. Therefore, this work focuses on the activities described in the throughput component of the model [17] as key process cornerstones specific to the ED:

– Patient arrives at ED
– Triage and room placement
– Diagnostic evaluation and ED treatment
– Patient disposition

# • Step 3: Identify tables in the datasets that reflect ED core activities

In this step, the key process activities identified in Step 2 serve as a guide for selecting key tables.
In our work, all tables within the MIMIC-IV-ED dataset are essential for representing the core activities of an ED process, and therefore all tables are included. Table 1 provides a detailed mapping between the cornerstone activities and their corresponding tables in the MIMIC-IV-ED dataset.

Table 1. Mappings between cornerstones and key tables in the MIMIC-IV-ED dataset

# • Step 4: Identify relationships between tables

This step focuses on identifying the relationships among the tables selected in Step 3. In this work, the relationships between these tables are defined based on the existing relational database schema of the MIMIC-IV-ED dataset. To illustrate these relationships, an Entity-Relationship (ER) diagram is provided in Figure 2.

Figure 2. Relationships between key tables in the MIMIC-IV-ED dataset

# • Step 5: Select documents relevant to ED process instance

This step aims to define the boundaries of process instances (i.e., cases) by identifying the start document that triggers an instance of the process and the end document that signifies the completion of the process instance. In this work, the temporal information available in most tables of the MIMIC-IV-ED dataset supports the identification of start and end activities within the ED process. An ED stay may start with various activities. If the patient’s arrival at the ED is the initial event, the edstays table, which “tracks patient admissions to the ED”, is considered the start document that triggers the process. Alternatively, if routine vital signs measurement, medicine reconciliation or medicine dispensation occurs prior to the patient’s arrival at the ED (e.g., in an ambulance), the corresponding table vitalsign, medrecon or pyxis is used as the start document. At the end of an ED stay, patients are discharged from the ED. Information regarding the patient’s diagnoses, which is used for billing purposes, is recorded in the diagnosis table.
This table serves as the end document, marking the completion of a patient’s ED process.

# • Step 6: Select the ED process instance level

This step determines the granularity of a case in the extracted event log. In line with the primary objective of this study, the focus is on individual ED stays, each uniquely identified by stay_id, which serves as the case ID in the extracted event log. The process instance documents identified in Step 5 represent the start and end activities of a single ED stay. It is important to note that a patient may have multiple ED stays, each treated as a separate process instance.

# • Step 7: Identify activities relevant to ED process instances

This step focuses on selecting relevant activities at the case level (identified in Step 6) based on the data recorded in the key tables identified in Step 3. In this study, all potential activities within a single ED stay are identified using data from the MIMIC-IV-ED dataset. According to the guideline [12], candidate activities are expected to have temporal information stored in the database. Although triage is a key activity in ED processes [17], the triage table in the MIMIC-IV-ED dataset does not provide timestamps. Based on the dataset documentation, “the closest approximation to triage time is the intime of the patient from the edstays table” [11]. Hence, we assign an artificial timestamp to the triage activity by adding one second to the time the patient enters the ED (i.e., the intime of the edstays table). This adjustment ensures that the triage activity does not affect the time of any subsequent activities. Table 2 lists the identified activities relevant to ED process instances and their corresponding temporal information from the relevant tables in the MIMIC-IV-ED dataset. At the end of this step, we have identified the three mandatory attributes of an event log, i.e., case ID, activity name and timestamp.
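The artificial triage timestamp described above (intime plus one second) can be sketched as follows; the function name is hypothetical and the snippet only illustrates the offset rule:

```python
from datetime import datetime, timedelta

def triage_timestamp(intime: datetime) -> datetime:
    # Artificial triage time: one second after the patient's ED entry,
    # so triage never displaces any genuinely timestamped later activity.
    return intime + timedelta(seconds=1)

enter = datetime.fromisoformat("2165-10-31T13:45:00")
triage = triage_timestamp(enter)
```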
Since the discharge time of an ED visit cannot precede or coincide with its entry time, cases violating this rule were filtered out.

Table 2. Activities and their time information stored in the MIMIC-IV-ED dataset

# • Step 8: Identify attributes relevant to ED process instance

In this step, the objective is to identify all relevant attributes in addition to the three mandatory attributes of an event log, based on the available data. In this work, all data attributes stored in the MIMIC-IV-ED dataset are included as relevant attributes.

# • Step 9: Relate attributes to activities

This step aims to relate the attributes identified in Step 8 either to a case or to an event within the event log, i.e., as a case attribute or an event attribute. An ED event log is then extracted based on these mappings. Table 3 and Table 4 present a comprehensive description of the attributes in the event log and their corresponding categories. Timestamps listed in Table 2 are excluded from these tables. At this stage, MIMICEL has been extracted.

Table 3. Descriptions of case attributes in the event log extracted from the MIMIC-IV-ED dataset

Table 4. Descriptions of event attributes in the event log extracted from the MIMIC-IV-ED dataset

# Data Records

In this work, the extracted event log MIMICEL is provided in two formats: as a CSV file and in XES (eXtensible Event Stream) format [18]. Table 5 provides a summary of MIMICEL data statistics. The MIMICEL dataset has been published on PhysioNet [19]. Details about the dataset can be found via https://physionet.org/content/mimicel-ed/2.1.0/.

Table 5. Descriptive statistics of MIMICEL

The extracted mimicel.csv contains a total of 7,568,824 events and 425,028 cases, recording the ED stays of 205,466 patients in the MIMIC-IV-ED dataset [11]. Each row in the CSV file represents an execution of an event during an ED stay, and each column corresponds to an attribute of that event.
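The stay-validity rule noted above (discharge must strictly follow entry) can be sketched as a simple filter. The tuple layout is hypothetical; in the real data, entry and discharge times come from the edstays table:

```python
from datetime import datetime

def is_valid_stay(intime: str, outtime: str) -> bool:
    # Keep a stay only if discharge strictly follows entry.
    return datetime.fromisoformat(outtime) > datetime.fromisoformat(intime)

stays = [
    ("35146496", "2165-10-31T13:45:00", "2165-10-31T18:00:00"),
    ("30505340", "2165-11-02T09:00:00", "2165-11-02T09:00:00"),  # zero duration, dropped
]
kept = [s for s in stays if is_valid_stay(s[1], s[2])]
```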
The MIMICEL dataset has three mandatory attributes represented by three columns, namely stay_id (i.e., the case ID of MIMICEL), activity and timestamps. The other columns in MIMICEL represent case attributes and event attributes; descriptions of these columns are provided in Table 3 and Table 4. Table 6 provides a snippet of MIMICEL, illustrating three cases identified by their respective stay_id values 35146496, 32354539 and 30505340. These cases are associated with the same subject_id (i.e., 10010848), demonstrating that a single patient may have multiple ED visits. Events within each case are ordered according to their timestamps. Each ED visit is characterised by distinct case attributes; in this snippet, these attributes include arrival_transport, disposition and acuity. For example, the ED visit with stay_id 35146496 arrived at the ED by ambulance and was sent home after discharge from the ED. These cases also have event attributes such as temperature, pain and seq_num, whose values may change as events are executed.

Table 6. A snippet of extracted MIMICEL in CSV format

The mimicel.xes file uses a standard XML-based format for event logs, known as XES (eXtensible Event Stream) [18]. XES maintains the general structure of an event log and uses the term “trace” instead of “case”. This format is widely supported by process mining tools. The mimicel.xes file is created by converting the original mimicel.csv file using the Python library PM4PY [20]. In the XES file, the case attributes from the CSV file are transformed into trace attributes. An example snippet of the XES event log is shown in Figure 3, which corresponds to the example provided in Table 6.
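To make the trace structure concrete, a single case can be rendered as a bare-bones XES `<trace>` element using only the standard library. This is a simplified sketch (no XES extensions, globals or classifiers, which a real file such as mimicel.xes would carry); the `concept:name` and `time:timestamp` keys follow the usual XES conventions:

```python
import xml.etree.ElementTree as ET

def case_to_xes_trace(case_id, events):
    """Render one case as a minimal XES <trace>.

    `events` is a list of (activity, iso_timestamp) pairs.
    """
    trace = ET.Element("trace")
    # The trace-level concept:name holds the case ID (here, a stay_id).
    ET.SubElement(trace, "string", key="concept:name", value=case_id)
    for activity, ts in events:
        ev = ET.SubElement(trace, "event")
        ET.SubElement(ev, "string", key="concept:name", value=activity)
        ET.SubElement(ev, "date", key="time:timestamp", value=ts)
    return trace

trace = case_to_xes_trace("35146496", [("Enter the ED", "2165-10-31T13:45:00+00:00")])
xml_text = ET.tostring(trace, encoding="unicode")
```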
# Technical Validation

In this section, we validate the quality of the extracted MIMICEL to ensure that no errors were introduced during log extraction and that the extracted log is reliable for analysing ED processes. The validation follows the quality assessment framework proposed by Vanbrabant et al. [21], which integrates the taxonomy of Bose et al. [22] to categorise quality issues in event logs and specifically addresses data quality issues in ED medical records. Vanbrabant et al. [21] also introduce DaQAPO [23], an R-based tool designed for systematic quality assessment as an implementation of their framework. We use DaQAPO to assess the quality of MIMICEL, with details of the coding implementation provided in the Code Availability section. The results, summarised in Table 7, are discussed below.

Missing values

This refers to data values that should have been recorded in the event log but are absent. In MIMICEL, certain attributes, including subject_id, gender, race, acuity, arrival_transport, and disposition, are mandatory for each case. Assessing these attributes for missing values is crucial for ensuring data completeness. To identify missing values of these attributes, we employ the function missing_values from DaQAPO, which detects missing values at different levels of granularity, including activity and specified attribute columns [23]. As a result, we detected 1.64% of cases lacking a value for the attribute acuity.

Table 7. Data quality issues identified in MIMICEL

It is essential to distinguish between missing values and null values, as null values can be valid in specific contexts. For example, the attribute hadm_id (hospital admission ID) will be null if a patient is not admitted to the hospital (e.g., when the disposition is “HOME”).
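DaQAPO is an R tool, but the missing-value rate it reports can be illustrated with a short Python analogue. The dictionaries and values below are hypothetical toy data, not drawn from MIMICEL:

```python
def missing_rate(cases, attribute):
    # Share of cases where a mandatory attribute is absent (None).
    missing = sum(1 for c in cases if c.get(attribute) is None)
    return missing / len(cases)

cases = [
    {"stay_id": 1, "acuity": 3},
    {"stay_id": 2, "acuity": None},  # missing acuity
    {"stay_id": 3, "acuity": 2},
    {"stay_id": 4, "acuity": 1},
]
rate = missing_rate(cases, "acuity")  # 1 of 4 cases lacks acuity
```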
In addition, the number of incomplete patient ED visit records serves as an essential quality metric [21]. Each ED visit record in the MIMIC-IV-ED dataset is expected to include mandatory activities such as entering, triage, and discharge. Missing one or more of these activities indicates an incomplete ED visit. To detect such cases, we utilise the incomplete_cases function, which identifies records missing one or more mandatory activities.

Violation of mutual dependencies

This issue refers to violations of the mutual dependencies between elements within the event log, such as between activities, between attributes, or between activities and attributes. For example, the MIMIC-IV-ED dataset documentation specifies a dependency between hadm_id and disposition. If a patient is admitted to the hospital after discharge from the ED (i.e., disposition = “ADMITTED”), the hadm_id should contain the corresponding hospital identifier. Conversely, if a patient is discharged home (i.e., disposition = “HOME”), the hadm_id should not contain a hospital identifier. Violations of these dependencies are identified using the attribute_dependencies function. The results reveal that 99.76% of cases where patients were admitted to the hospital have a valid hadm_id. However, 15.1% of cases in which patients were discharged home incorrectly retain a value for hadm_id.

Invalid timestamp

This quality issue relates to the validity of the recorded timestamps. To detect invalid timestamps, the time_anomalies function was applied to detect zero or negative case durations. The validation result indicates that no cases in MIMICEL exhibit an invalid case duration. This observation also suggests that there is no violation of the logical order within the event log, as the “Enter” activity always occurs before the “Discharge” activity.
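The disposition/hadm_id dependency just described can be checked with a few lines of Python. This sketch mirrors the rule only conceptually; the actual validation used DaQAPO's attribute_dependencies function, and the data below is invented:

```python
def dependency_violations(cases):
    """Return stay_ids breaking the disposition/hadm_id dependency:
    ADMITTED cases need a hadm_id; HOME cases must not have one."""
    bad = []
    for c in cases:
        if c["disposition"] == "ADMITTED" and c["hadm_id"] is None:
            bad.append(c["stay_id"])
        elif c["disposition"] == "HOME" and c["hadm_id"] is not None:
            bad.append(c["stay_id"])
    return bad

cases = [
    {"stay_id": 1, "disposition": "ADMITTED", "hadm_id": "H100"},
    {"stay_id": 2, "disposition": "HOME", "hadm_id": "H200"},  # violation
    {"stay_id": 3, "disposition": "HOME", "hadm_id": None},
]
violations = dependency_violations(cases)
```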
Repeated activities with identical timestamps may appear to indicate imprecise timestamps [21]. To identify such cases, we apply the multi_registration function, which detects activities repeated at the same timestamp within a case. The validation results are summarised as follows:

• Of the 425,028 cases in MIMICEL, 304,369 have the activity “Medicine reconciliation” executed. Within this subset, 87.07% (265,016 of 304,369) exhibit repeated instances of the “Medicine reconciliation” activity occurring at the same point in time. This occurs because unique medication information is recorded for each reconciliation, even when performed simultaneously.

• “Medicine dispensation” was performed in 295,998 of the 425,028 cases in MIMICEL. Within this subset, 68.84% (203,770 of 295,998) have repeated instances of the activity “Medicine dispensation” occurring at the same point in time. These repetitions arise because each occurrence represents the dispensation of a distinct medication, even when multiple medications are dispensed at once.

• 60.13% (256,308 of the 425,028) of cases contain multiple instances of the activity “Discharge from the ED” executed at the same time. This happens because each occurrence is tied to a distinct diagnosis, and a single discharge event may involve multiple diagnoses.

In MIMICEL, these repeated activities with identical timestamps are valid and not considered a quality issue. These repetitions occur because their corresponding attributes capture distinct details specific to each instance. The event log format (i.e., the XES standard) allows only one value per attribute, requiring these repetitions to accurately capture the granularity of event-specific information. As a result, these repetitions highlight the richness of the data rather than indicating any issue in data quality.
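The multi_registration check above boils down to counting (activity, timestamp) pairs within a case. A minimal Python analogue, on invented toy events:

```python
from collections import Counter

def repeated_same_timestamp(events):
    # Find (activity, timestamp) pairs occurring more than once in one case.
    counts = Counter((e["activity"], e["timestamp"]) for e in events)
    return {k: n for k, n in counts.items() if n > 1}

case_events = [
    {"activity": "Medicine reconciliation", "timestamp": "2165-10-31T14:00:00"},
    {"activity": "Medicine reconciliation", "timestamp": "2165-10-31T14:00:00"},
    {"activity": "Vital sign check", "timestamp": "2165-10-31T14:30:00"},
]
dups = repeated_same_timestamp(case_events)
```

As in MIMICEL, such duplicates are not necessarily errors: each repetition can carry distinct attribute values (e.g., a different medication).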
Outside domain range

This refers to the identification of attribute values that fall outside the range of possible values. According to the description of the MIMIC-IV-ED dataset, patients’ self-reported pain levels are expected to lie within a range of 0–10; any value outside this range is considered a violation. To identify attributes with values outside their domain range, we use the function attribute_range. The results reveal that 29% of cases contain values for pain that fall outside the range of 0 to 10.

Inconsistent formatting

This issue concerns data values that do not conform to a consistent format. In MIMICEL, the timestamps of activities such as “Medicine reconciliation”, “Medicine dispensation” and “Vital sign check” are accurate only to the minute, whereas the timestamps of the activities “Enter the ED” and “Discharge from the ED” are accurate to the second. Furthermore, the attribute “temperature” exhibits an inconsistent data format: the majority of patient temperatures are recorded in degrees Fahrenheit, while some are documented in Celsius.

Remark

Based on the above validation, no errors were introduced during the log extraction process, and all the identified issues were inherited from the MIMIC-IV-ED dataset. Whether to handle these issues depends on specific analytical objectives; the next section provides detailed examples.

# Usage Notes

This section presents analyses performed using MIMICEL, demonstrating the utility of the proposed dataset in addressing key questions related to the ED process. These analyses were inspired by frequently posed questions from ED experts on specific ED process activities, as summarised by Rojas et al. [14].
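The domain-range check for pain values can be sketched as follows (a Python analogue of DaQAPO's attribute_range, on invented values):

```python
def out_of_range(values, low=0, high=10):
    # Return values outside the documented domain range (pain: 0-10);
    # None entries are skipped, as they are missing rather than invalid.
    return [v for v in values if v is not None and not (low <= v <= high)]

pain = [0, 3, 13, 7, 95, None]
bad = out_of_range(pain)  # 13 and 95 violate the 0-10 range
```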
These questions were obtained through a combination of interviews with ED specialists and literature reviews, ensuring that they reflect the needs of improving ED operations and management [14]. Broadly, these questions fall into acuity-driven questions and ED length of stay (LoS)-oriented questions. Acuity-driven questions focus on the urgency of care, which is assessed during triage, a critical step in ED operations where patients are assigned acuity levels to determine the priority of required interventions. Accordingly, the first analysis focuses on acuity, aiming to uncover characteristics of, and differences between, ED processes for patients with varying acuity levels. The second analysis investigates ED LoS, a key performance indicator for evaluating ED efficiency [24]. This analysis seeks to provide a comprehensive understanding of the process characteristics associated with ED visits, distinguishing between normal and prolonged cases. As a key challenge in the ED, overcrowding can lead to prolonged LoS [25]. Therefore, in the third analysis, we pay particular attention to the issue of overcrowding in the ED. This analysis investigates the characteristics of ED processes under varying levels of crowdedness to better understand the impact of overcrowding on emergency care operations. Based on insights from ED specialists, certain activities, such as “Medicine reconciliation”, “Medicine dispensation”, and “Vital sign check”, may occur during transportation to the ED (e.g., via ambulance or helicopter) and are repeated upon the patient’s arrival. In this work, we focus exclusively on ED activities that take place once the patient has entered the ED. Consequently, we filtered out events occurring at or prior to the event “Enter the ED”, resulting in the removal of 80,581 events (out of 7,568,824), approximately 1.06% of the total events in the MIMICEL dataset.
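The pre-arrival filter described above can be sketched per case as follows. This is a simplified illustration: it assumes (as we do here, though the source does not spell it out) that the “Enter the ED” event itself is retained while other events timestamped at or before it are dropped:

```python
def drop_pre_arrival(events):
    """Keep only activity from the patient's ED entry onwards:
    drop events timestamped at or before 'Enter the ED'."""
    enter_ts = next(e["timestamp"] for e in events if e["activity"] == "Enter the ED")
    # ISO-8601 strings of equal format compare correctly as plain strings.
    return [e for e in events
            if e["activity"] == "Enter the ED" or e["timestamp"] > enter_ts]

events = [
    {"activity": "Medicine reconciliation", "timestamp": "2165-10-31T13:30:00"},  # in ambulance
    {"activity": "Enter the ED", "timestamp": "2165-10-31T13:45:00"},
    {"activity": "Vital sign check", "timestamp": "2165-10-31T14:00:00"},
]
filtered = drop_pre_arrival(events)
```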
All analyses presented were conducted using this filtered version of MIMICEL.

# Acuity-based analysis

In the MIMIC-IV-ED dataset, the assignment of acuity levels is guided by the five-level Emergency Severity Index (ESI), which categorises patients based on the urgency of their condition. Level 1 represents the most urgent cases requiring immediate physician intervention, while level 2 patients are also at high risk, with their subsequent placement determined by nursing assessments. Patients at levels 3, 4, and 5 have progressively decreasing levels of urgency. The acuity-based analysis examines the influence of these acuity levels on ED process characteristics. During the validation of MIMICEL, it was identified that 1.64% of cases have missing values for the attribute acuity; these cases are excluded from the following analyses. With access to event logs, process maps can be generated to visualise an end-to-end process. For example, Figure 4 depicts a process map for ED visits with an acuity level of 3. This category accounts for over 50% of cases in MIMICEL, making it a representative example. The process map, generated using the process mining tool Disco [1], provides a visualisation of the actual flow of the process for ED visits with this acuity level. Using colour and thickness coding, it highlights the relationships between activities, the most common paths (i.e., sequences of activities) taken by ED visits, and bottlenecks within the process. For instance, the path from triage to discharge is relatively uncommon, appearing in only 2.65% of cases with acuity level 3. The median time interval between these two activities is 2.1 hours (displayed by the thick red arrow), which indicates a bottleneck in this process. Additional details on interpreting the process map are provided in the Appendix.

Figure 4.
The process map of ED visits with an acuity level of 3

Table 8 summarises the case coverage of three optional activities across different acuity levels. Table 9 presents the case coverage of paths observed across different acuity levels, and Table 10 provides the corresponding time intervals between these activities.

Table 8. The case coverage of three activities observed in cases with different acuity levels

A decreasing trend is observed in the frequency of “Medicine dispensation”, “Medicine reconciliation” and “Vital sign check” as the urgency level decreases (from acuity 1 to acuity 5). “Vital sign check” remains the most consistently performed activity, occurring in 68% of cases even in the acuity 5 cohort. In contrast, “Medicine dispensation” and “Medicine reconciliation” are significantly less frequent at lower urgency levels, with a notable decline from acuity 3 onwards.

Table 9. The case coverage of paths observed in cases with different acuity levels

As the urgency level decreases (from acuity 1 to acuity 5), the frequency of most paths declines, with the exception of the path from triage to discharge. Consecutive vital sign checks are the most frequently observed paths across all acuity levels, occurring in 73% of cases for the acuity 1 cohort and dropping to 10% for the acuity 5 cohort. Similarly, paths between “Vital sign check” and “Medicine dispensation” follow this downward trend, with higher frequencies at higher urgency levels (e.g., 65.2% and 68.8% for acuity 1) and significantly lower frequencies at reduced urgency levels (8.9% and 16.6% for acuity 5). In contrast, the path from triage to discharge, although the least common in urgent cases (e.g., 1.23% for acuity 1), becomes more frequent with decreasing urgency, reaching 18% for acuity 5.

Table 10. The time interval between activities observed in cases with different acuity levels
It is evident that the time intervals between activities generally increase as the urgency level decreases (from acuity 1 to acuity 5), except for the path from triage to discharge. The time interval for consecutive vital sign checks increases significantly with decreasing urgency, from 30 minutes for acuity 1 to 120 minutes for acuity 5. A similar upward trend is observed in the time intervals for the paths between “Vital sign check” and “Medicine dispensation” as urgency decreases. A notable exception is the interval between triage and discharge, which exhibits the reverse trend, decreasing from 126 minutes for acuity 1 to 60 minutes for acuity 5. A comparative analysis of Table 8, Table 9 and Table 10 reveals the following insights:

# • Higher-acuity patients require more frequent and intensive medical interventions

The activity “Medicine dispensation” is significantly more common in higher-acuity cases, observed in 81.5% of acuity 1 cases, gradually decreasing to 68.9% for acuity 3 and dropping further to only 27.6% in acuity 5 cases (Table 8). This trend indicates that higher-acuity patients often require intensive medical interventions to stabilise their conditions, while lower-acuity patients frequently require minimal or no medication at all.

# • Lower-acuity patients require less frequent and less intensive vital sign monitoring

ED visits with higher acuity levels involve consecutive vital sign checks more frequently, reflecting the severity of patients’ conditions. For example, 73% of cases in the acuity level 1 cohort involve consecutive monitoring of vital signs, compared to only 10% in the acuity level 5 cohort (Table 9). Moreover, the interval between consecutive vital sign checks is significantly shorter for higher acuity levels, with a median duration of 30 minutes for acuity 1 and 120 minutes for acuity 5 (Table 10).
These findings suggest that as acuity decreases, patient conditions tend to be more stable, necessitating less frequent monitoring of their vital signs.

# • Higher-acuity patients require frequent and intensive cycles of medical interventions and vital sign monitoring

ED visits with higher acuity undergo more frequent and rapid transitions between “Medicine dispensation” and “Vital sign check”, reflecting the intensive care required to manage critical conditions. For example, in the acuity 1 cohort, 65.2% of cases involve a transition from “Vital sign check” to “Medicine dispensation”, and 68.8% involve the reverse transition. In contrast, these proportions drop significantly for acuity 5, where only 8.9% and 16.6% of cases involve these transitions (Table 9). The median time intervals between these transitions are shorter for acuity 1 (16 and 14 minutes) than for acuity 5 (50 and 36 minutes) (Table 10). These findings demonstrate that higher-acuity patients require frequent and tightly timed cycles of intervention, ensuring continuous monitoring and timely responses to their critical needs. Lower-acuity patients, on the other hand, experience fewer cycles with longer intervals, consistent with their more stable conditions.

# • Lower-acuity patients follow simpler and faster care pathways

Lower-acuity patients are more likely to be discharged directly after triage, as seen in 18% of acuity 5 cases compared to only 1.23% of acuity 1 cases (Table 9). In addition, the time interval between triage and discharge is shorter for the acuity 5 cohort (60 minutes) than for acuity 1 (126 minutes) (Table 10). This reflects the simplicity and efficiency of care for lower-acuity patients, in contrast to the prolonged pathways required for higher-acuity cases.
# LoS-driven analysis

ED length of stay (LoS) is an essential metric that measures the time between a patient’s arrival at the ED and their physical departure from the ED [24]. LoS represents the total duration of a patient’s stay in the ED and serves as a key indicator of ED efficiency. To explore factors contributing to prolonged LoS in the ED, we incorporated the patient’s acuity level. As previously discussed, acuity levels were assigned based on the five-level ESI system, where levels 1 and 2 represent high acuity and levels 3, 4, and 5 represent low acuity. An analysis of the LoS distribution in the dataset reveals that 75% of cases have a LoS of 500 minutes or less, which aligns with the internationally recommended acceptable ED LoS of at most 8 hours [26]. Consequently, 500 minutes was established as the threshold for normal LoS. Cases with a LoS exceeding 500 minutes, comprising the remaining 25% of cases, were classified as having a prolonged LoS. To facilitate the analysis, all cases were divided into four zones based on combinations of LoS (normal vs. prolonged) and acuity level (high vs. low). Figure 5 provides a visual representation of these zones, helping to identify distinct characteristics of processes with varying LoS and across different acuity levels. Based on this figure, we observed that approximately 27% of cases fall into Q1, representing urgent cases with normal LoS, indicative of efficient management. In contrast, Q4, which accounts for 11.94% of cases, represents urgent cases with prolonged LoS. The majority of cases, 47.63%, fall into Q2, comprising non-urgent patients with shorter stays. A focused analysis of Q1 and Q4 enabled a detailed comparison of processes in normal versus prolonged LoS scenarios for urgent patients. The observations, illustrated in Figure 6, are summarised below.
1. Vital sign check self-loop: Consecutive vital sign checks occurred in 88% of Q4 cases, compared to 58% in Q1, indicating more intensive monitoring for patients with prolonged LoS.

2. Duration of the path from “Medicine dispensation” to “Vital sign check”: The median duration of this path in Q4 was twice as long as in Q1. Q4 also had a higher percentage of cases with this path (83%) compared to 57% in Q1.

3. Duration of the path from “Vital sign check” to “Medicine dispensation”: This path’s duration was twice as long in Q4 as in Q1. While 50% of Q1 cases included this path, it was present in 80% of Q4 cases.

These observations indicate that Q4 patients, characterised by high acuity and prolonged LoS, experience significantly more intensive vital sign monitoring and slower transitions between medicine dispensation activities than Q1 patients. This suggests that the prolonged LoS in Q4 may result from the increased complexity and duration of ED processes required to stabilise these patients. Subsequently, we investigate the processes within the Q4 zone. Given that the majority of cases in MIMICEL are discharged either to “Home” (57%) or “Admitted to the hospital” (37%), we examine the processes with long LoS for these two dispositions. Consequently, we divide Q4 into two cohorts: the “Home Cohort” and the “Admitted Cohort”. Analysis revealed significant differences in process durations between these two groups, as shown in Table 11.

[Figure 5 quadrants: Q1 (Urgent Quick), 27.15% of cases: acuity 1–2, LoS ≤ 500 minutes; Q2 (Non-urgent Quick), 47.63%: acuity 3–5, LoS ≤ 500 minutes; Q3 (Non-urgent Slow), 13.28%: acuity 3–5, LoS > 500 minutes; Q4 (Urgent Slow), 11.94%: acuity 1–2, LoS > 500 minutes]

Figure 5. Distribution of ED visits in accordance with LoS and acuity levels

Figure 6.
Comparison of the process between Q1 Urgent Quick zone and Q4 Urgent Slow zone
Table 11. Median duration (in minutes) of different paths for ED visits with different dispositions (Q4)
Home Cohort vs Admitted Cohort: For cases in the "Home Cohort", the median duration of consecutive vital sign checks was 120 minutes, nearly double the 64 minutes observed in the "Admitted Cohort". Similarly, the duration of the path from "Vital sign check" to "Medicine dispensation" was longer for the "Home Cohort" (61 minutes) compared to the "Admitted Cohort" (39 minutes). A comparable trend was observed for the path from "Medicine dispensation" to "Vital sign check", with median durations of 53 minutes and 24 minutes for the "Home" and "Admitted" cohorts, respectively. These observations suggest that patients discharged home experience more prolonged monitoring and slower transitions between activities, possibly reflecting extended observation times or additional checks to confirm their readiness for discharge. In contrast, patients in the "Admitted Cohort" may follow a more streamlined process with quicker transitions, likely due to their immediate transfer to hospital wards for continued care.
# Crowdedness analysis
ED overcrowding is a critical issue that arises when the demand for emergency healthcare services surpasses the capacity of an ED to deliver timely and suitable care to patients27. In our dataset, we lack explicit information regarding the ED's capacity. To address this limitation, we propose a crowdedness criterion derived from the statistical distribution of simultaneously treated patients within the ED. This criterion indicates the level of overcrowding within the ED, allowing us to gain insights into the extent of the issue and its potential implications.
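The LoS/acuity zoning used in the preceding analysis reduces to a simple rule. Below is a minimal Python sketch, assuming the 500-minute threshold and the ESI 1–2 high-acuity grouping stated in the text; the sample visits are purely illustrative.

```python
# Sketch of the LoS/acuity quadrant assignment described above.
# LoS <= 500 minutes counts as "normal"; ESI levels 1-2 count as "high acuity".

def assign_zone(los_minutes: float, esi_level: int) -> str:
    """Map one ED visit to a quadrant (Q1-Q4)."""
    high_acuity = esi_level in (1, 2)   # ESI 1-2 = high acuity
    prolonged = los_minutes > 500       # 75th-percentile threshold from the text
    if high_acuity and not prolonged:
        return "Q1"  # urgent, quick
    if not high_acuity and not prolonged:
        return "Q2"  # non-urgent, quick
    if not high_acuity and prolonged:
        return "Q3"  # non-urgent, slow
    return "Q4"      # urgent, slow

# Example visits: (LoS in minutes, ESI level) -- illustrative values only
visits = [(230, 2), (480, 4), (610, 5), (720, 1)]
print([assign_zone(los, esi) for los, esi in visits])  # ['Q1', 'Q2', 'Q3', 'Q4']
```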
# Technical definitions
• Simultaneously treated patients: Consider an ED visit for a patient $p$, whose enter and discharge times are denoted by $t_e$ and $t_d$, respectively. Any patient who entered the ED after $p$ was discharged, or who was discharged before $p$ entered the ED, cannot be considered simultaneously treated with $p$. Hence, a different patient $p'$ (with enter and discharge times $t_e'$ and $t_d'$) is considered simultaneously treated with $p$ if and only if the following holds: $\neg (t_e' > t_d \lor t_d' < t_e)$
• Crowdedness threshold: For the purpose of crowdedness analysis, we use the $75^{th}$ percentile of the distribution of the number of simultaneously treated patients in the ED as the threshold for determining ED crowdedness. If the number of simultaneous patients associated with a specific ED visit exceeds this threshold, we classify that visit as having taken place in a crowded ED. Applying this approach to MIMICEL, the ED is considered crowded when there are 12 or more simultaneously treated patients.
Crowded ED vs non-crowded (normal) ED: Figure 7 depicts a comparison between the crowded and non-crowded ED. We analyse the process mainly in terms of time intervals between activities. The following observations can be made. 1. Time interval between consecutive vital sign checks: For the "Home Cohort", the time interval between consecutive vital sign checks is 2 hours, compared to the 1-hour duration in the "Admitted Cohort". 2. Time duration taken by the path from the second-last activity to "Discharge from the ED": In a crowded ED, the time taken from the immediately preceding activity to discharge is longer.
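The simultaneity predicate and crowdedness threshold defined above can be sketched in a few lines of Python. The percentile method (`statistics.quantiles` with exclusive quartiles) and the sample timestamps are assumptions for illustration, not the paper's exact computation.

```python
import statistics
from datetime import datetime, timedelta

def simultaneously_treated(te, td, te2, td2) -> bool:
    """Visits overlap unless one starts after the other ended:
    not (t_e' > t_d or t_d' < t_e), mirroring the definition above."""
    return not (te2 > td or td2 < te)

def crowdedness_threshold(counts):
    """75th percentile of simultaneous-patient counts
    (exclusive quartiles; the method choice is an assumption)."""
    return statistics.quantiles(counts, n=4)[2]

t0 = datetime(2024, 1, 1, 8, 0)
a = (t0, t0 + timedelta(hours=4))                       # 08:00-12:00
b = (t0 + timedelta(hours=3), t0 + timedelta(hours=6))  # 11:00-14:00
c = (t0 + timedelta(hours=5), t0 + timedelta(hours=7))  # 13:00-15:00
print(simultaneously_treated(*a, *b))  # True  (overlap 11:00-12:00)
print(simultaneously_treated(*a, *c))  # False (c starts after a ends)
```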
The difference is primarily visible in the path from "Vital sign check" to discharge and in the path from "Medicine Reconciliation" to discharge.
Home Cohort vs Admitted Cohort: An intriguing observation arose when the distribution of disposition cohorts was analysed between the non-crowded and crowded ED. Within the entire "Admitted Cohort" of 158,010 individuals, approximately one-third (48,156 patients) can be categorised as being treated in crowded ED conditions. On the other hand, within the "Home Cohort", which includes 241,626 patients, only one-fourth (60,384 patients) are associated with crowded ED conditions. These findings indicate that the proportion of admitted patients is significantly higher in crowded ED settings. Based on these findings, we conduct a more in-depth examination of the observations regarding time intervals between activities, specifically the intervals between consecutive vital sign checks and between the second-last activity and discharge, with a focus on disparities between the "Admitted Cohort" and the "Home Cohort". Based on Table 12, we can observe that the longer time interval between consecutive vital sign checks is mainly contributed by the "Home Cohort". In terms of the time to discharge, the main difference between the two cohorts is visible in the path from the vital sign check to discharge. In the "Admitted Cohort", this duration is considerably longer (58 minutes) compared to the "Home Cohort" (22 minutes). In the other discharge paths, the two cohorts display similar time intervals.
Figure 7. Comparison of the process between non-crowded and crowded ED
Table 12. Time interval differences between activities for Admitted and Home discharge cohorts in a crowded ED
# Code Availability
The code for data extraction, XES log conversion, technical validation, and analysis can be accessed online through our GitHub repository (https://github.com/ZhipengHe/MIMIC-IV-event-log-extraction-for-ED).
These scripts are publicly available to allow for reproducibility and code reuse.
1. Data Extraction (1_extract_eventlog): This folder contains PostgreSQL scripts for extracting the event log from the MIMIC-IV-ED database and exporting it as CSV files. It is important to note that the MIMICEL event log generated in this study is derived from the MIMIC-IV-ED dataset11. Consequently, valid access to the MIMIC-IV-ED dataset is required to use MIMICEL. The folder includes the following four SQL scripts:
• 1_preprocessing.sql: preprocesses the MIMIC-IV-ED database in preparation for converting its tables to activities with timestamps.
• 2_to_activity.sql: converts the processed tables in the MIMIC-IV-ED database into activity tables.
• 3_to_eventlog.sql: combines all activity tables into a single event log.
• 4_clean.sql: cleans invalid cases from the event log, i.e., cases where the attribute intime is no earlier than outtime.
2. XES Log Conversion (2_to_xes): This folder contains scripts run in a Python environment to convert the event log to the XES format. It includes a Python script and a Jupyter Notebook for log conversion:
• csv2xes.ipynb & csv2xes.py: convert the event log from CSV to XES format.
By the end of the data extraction and XES log conversion, the MIMICEL event log was obtained. This log serves as the foundation for subsequent analyses following a technical validation of its data quality.
3. Technical Validation (3_validation): This folder includes details of the R package DaQAPO, which was utilised to assess the data quality of the event log.
• data_quality.Rmd: detects event log data quality issues, such as missing values, incomplete cases, violations of activity order, etc. (see details in the Technical Validation section)
• data_quality.html & data_quality_revised.html: store the output report of the technical validation. The revised version removes case lists from the output report to improve the readability of the HTML report.
4.
Log Preparation (4_analysis/log_preparation): This folder includes SQL scripts for filtering the event log and generating meaningful insights from it:
• 5_insights.sql: generates insights from the event log, such as the length of stay and static attributes.
• 6_filter.sql: filters the event log by removing events that happen before, or have the same timestamp as, "Enter the ED", as the analyses conducted in this work focus exclusively on ED activities that take place onsite.
5. Log Analysis (4_analysis/log_analysis): The extracted event log was used for further analysis with process mining tools and the Python environment. This analysis focused on:
• acuity_cohorts.sql & throughput.sql: extract sublogs by acuity levels and discharge types for further analyses.
• acuity_LoS.ipynb & crowdedness.ipynb: demonstrate the usage of the filtered event log in the analysis (see details in the Usage Notes section).
# References
1. Ibanez-Sanchez, G., Celda, M. A., Mandingorra, J. & Fernandez-Llatas, C. Interactive process mining in emergencies. In Interactive Process Mining in Healthcare, 165–180 (2021).
2. Duma, D. & Aringhieri, R. An ad hoc process mining approach to discover patient paths of an emergency department. Flex. Serv. Manuf. J. 32, 6–34 (2018).
3. Savioli, G. et al. Emergency department overcrowding: Understanding the factors to find corresponding solutions. J. Pers. Medicine 12 (2022).
4. Brenner, S. et al. Modeling and analysis of the emergency department at University of Kentucky Chandler Hospital using simulations. J. Emergency Nursing 36, 303–310 (2010).
5. Rebuge, Á. & Ferreira, D. R. Business process analysis in healthcare environments: A methodology based on process mining. Inf. Systems 37, 99–116 (2012).
6. Delias, P., Manolitzas, P., Grigoroudis, E. & Matsatsinis, N. Applying process mining to the emergency department. In Encyclopedia of Business Analytics and Optimization, 168–178 (2014).
7. Munoz-Gama, J. et al.
Process mining for healthcare: Characteristics and challenges. J. Biomed. Informatics 127, 103994 (2022).
8. Martin, N. et al. Recommendations for enhancing the usability and understandability of process mining in healthcare. Artif. Intell. Medicine 109, 101962 (2020).
9. Cho, M. et al. Process mining-supported emergency room process performance indicators. Int. J. Environ. Res. Public Health 17, 6290 (2020).
10. van der Aalst, W. M. Process Mining: Data Science in Action (2016).
11. Johnson, A., Bulgarelli, L., Pollard, T., Celi, L. A., Mark, R. & Horng, S. MIMIC-IV-ED (version 2.2). PhysioNet https://doi.org/10.13026/5ntk-km72 (2022).
12. Jans, M., Soffer, P. & Jouck, T. Building a valuable event log for process mining: an experimental exploration of a guided process. Enterp. Inf. Syst. 13, 601–630 (2019).
13. Remy, S., Pufahl, L., Sachs, J. P., Böttinger, E. & Weske, M. Event log generation in a health system: a case study. In International Conference on Business Process Management, 505–522 (Springer, 2020).
14. Rojas, E. et al. Question-driven methodology for analyzing emergency room processes using process mining. Appl. Sci. 7, 302 (2017).
15. Andrews, R. et al. Quality-informed semi-automated event log generation for process mining. Decis. Support Syst. 132, 113265 (2020).
16. Andrews, R., Suriadi, S., Wynn, M. T., ter Hofstede, A. H. M. & Rothwell, S. Improving patient flows at St. Andrew's War Memorial Hospital's emergency department through process mining. In Business Process Management Cases (2018).
17. Asplin, B. R. et al. A conceptual model of emergency department crowding. Annals of Emergency Medicine 42, 173–180 (2003).
18. IEEE. IEEE Standard for eXtensible Event Stream (XES) for Achieving Interoperability in Event Logs and Event Streams. IEEE Std 1849-2016, 1–50 (2016).
19. Goldberger, A. L. et al. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals.
Circulation [Online] 101, e215–e220 (2000).
20. Berti, A., van Zelst, S. J. & van der Aalst, W. Process mining for Python (PM4Py): Bridging the gap between process and data science. In Proceedings of the ICPM Demo Track 2019, co-located with the 1st International Conference on Process Mining (ICPM 2019), 13–16 (2019).
21. Vanbrabant, L., Martin, N., Ramaekers, K. & Braekers, K. Quality of input data in emergency department simulations: framework and assessment techniques. Simul. Model. Pract. Theory 91, 83–101 (2019).
22. Bose, J. C. J. C., Mans, R. S. & van der Aalst, W. M. Wanna improve process mining results? 2013 IEEE Symposium on Computational Intelligence and Data Mining (CIDM), 127–134 (2013).
23. Martin, N., Van Houdt, G. & Janssenswillen, G. DaQAPO: Supporting flexible and fine-grained event log quality assessment. Expert Syst. with Appl. 191, 116274 (2022).
24. Vanbrabant, L., Braekers, K., Ramaekers, K. & Van Nieuwenhuyse, I. Simulation of emergency department operations: A comprehensive review of KPIs and operational improvements. Comput. & Ind. Eng. 131, 356–381 (2019).
25. Di Somma, S. et al. Overcrowding in emergency department: an international issue. Intern. Emerg. Med. 10, 171–175 (2015).
26. Rose, L. et al. Emergency department length of stay for patients requiring mechanical ventilation: a prospective observational study. Scand. J. Trauma Resusc. Emerg. Med. 20, 1–7 (2012).
27. Sartini, M. et al. Overcrowding in emergency department: causes, consequences, and solutions—a narrative review. Healthcare 10, 1625 (2022).
# Appendix
# Interpretation of process maps
The process maps in this study were created using Disco, a process mining tool2 designed for visualising and analysing processes discovered from event logs. To enhance the understanding of these process maps, we provide explanations of the notations used, offering essential background information. In the following, we outline the basic notations for a process map (as annotated in Figure 8).
1. Process start: The start of the process is represented by a triangle symbol located at the top of the process map.
2. Process end: The end of the process is denoted by a stop symbol.
3. Activities: Activities are depicted as rectangular boxes. Each box includes two elements:
• The median execution time of the activity.
• The percentage of cases that pass through the activity.
4. Process flow: The flow between two activities is represented by an arrow.
• Solid arrows indicate transitions between activities during the process.
• Dashed arrows point to activities occurring at the very beginning or end of the process.
• Each arrow is annotated with two types of data:
– The median execution time of the path.
– The percentage of cases that follow the path.
Figure 8. The annotated process map of ED visits with the acuity level of 3
The global issue of overcrowding in emergency departments (EDs) necessitates the analysis of patient flow through the ED to enhance efficiency and alleviate overcrowding. However, traditional analytical methods are time-consuming and costly. The healthcare industry is embracing process mining tools to analyse healthcare processes and patient flows. Process mining aims to discover, monitor, and enhance processes by obtaining knowledge from event log data. However, the availability of event logs is a prerequisite for applying process mining techniques. Hence, this paper aims to generate an event log for analysing processes in the ED. In this study, we extract an event log from the MIMIC-IV-ED dataset and name it MIMICEL. MIMICEL captures the patient journey in the ED, allowing for analysis of patient flows and improvement of ED efficiency. We present analyses conducted using MIMICEL to demonstrate the utility of the dataset. The curation of MIMICEL facilitates extensive use of MIMIC-IV-ED data for ED analysis using process mining techniques, while also providing the process mining research community with a valuable dataset for study.
# CCS Concepts
• Software and its engineering → Automatic programming; Software maintenance tools; Software evolution.
# Keywords
Automated Software Engineering, Agentic Systems, AI for Software Development
# 1 Introduction
Large language models (LLMs) have shown promise in coding, reasoning, and problem solving. Model capability itself has improved significantly on reasoning tasks, by training models to always use reasoning paradigms such as chain-of-thought without further prompting, as shown by OpenAI o1 [25] and its descendant models such as o3. On top of improved foundation models, the software engineering community is distilling known best practices for downstream tasks into amplifiers for LLMs through agentic systems [40]. An LLM agent for software engineering autonomously invokes various interfaces, including testing and analysis tools, driven by an LLM reasoning agent. While agentic systems inspire researchers [5, 40], start-ups [3, 26], and major companies [28, 34], we currently observe strong specialization among the emerging LLM agents for software. Agentless [38] performs program repair, Large Language Monkeys [9] produce unit tests, ExecutionAgent [8] performs project setup, and so forth. These specialized agents are reasonable efforts, especially as they require specialized metrics and datasets. However, with many agents comes the need to manage and maintain them as part of future development environments! We propose a unified Software Engineering agent (USEagent) representing a consolidated agentic capability for software engineering tasks. Individual SE tasks could be coordinated into a more effective ensemble: better test generation should benefit reproduction in program repair, review of patches should lead to better generated code, etc. With an integrated software engineering agent, it becomes much more feasible to co-opt it in future development environments [2].
As a foundation, we need a unified dataset of challenging SE tasks to test our unified agentic capabilities, for which we build the Unified Software Engineering Bench, or USEbench for short. Recently, the SWE-bench dataset [18] has been proposed, which captures a set of GitHub issues described in natural language and requires bug fixes or feature additions in software projects. Thus the SWE-bench dataset is replete with challenges in software maintenance for various real-life software projects. Our proposed benchmark USEbench is a meta-benchmark composing multiple software engineering tasks (such as code generation, program repair, test generation) behind a unified application programming interface (API). Building a unified API is key, since it allows us to start thinking of a unified agentic capability which can handle different software engineering tasks. After constructing USEbench, we extend two popular agentic systems, one from industry and one from academia, AutoCodeRover [43] and OpenHands CodeActAgent [35, 36], to solve the tasks in USEbench. We consider this a natural evolution of the flurry of research proposing manifold agents for different software engineering tasks. Existing specialized agents usually employ a fixed workflow and approach the given task in a few pre-defined steps. For example, AutoCodeRover tackles program repair tasks using a fixed two-phase workflow consisting of fault localization and patch generation. Now, what does it take to generalize an existing program repair agent, with fixed actions and a pre-defined workflow, so that it can act as a unified software engineering agent (USEagent)? For each task or problem in USEbench, the agentic systems are only provided with a description (i.e., a bug report or the documentation of a method to generate) as well as access to a containerized environment of the project.
As such, our meta-benchmark reveals inherent challenges regarding task identification, workflow configuration, and measuring progress (esp. determining end-criteria). To address these requirements, we equip AutoCodeRover with a Meta-Agent, which is instructed to orchestrate the appropriate agents and construct a workflow on the fly. Over the course of this workflow, the Meta-Agent utilizes available actions to construct and maintain a project state, constituting a structured consensus memory over the trajectories of LLM components. Since OpenHands CodeActAgent is a general agent designed for solving general tasks, we consider it for baseline comparison, with no architectural changes applied. We report the efficacy of both agentic systems on USEbench, including PASS@1 and PASS@5 results. We perform a detailed error analysis as well as a manual inspection of positives, to identify false positives and cases of over-fitting or memorization. On USEbench, consisting of 1271 tasks including program repair, regression testing, code generation, and test generation, USEagent achieves an efficacy of 33.3%, which is higher than the 26.8% efficacy of the state-of-the-art general agent OpenHands CodeActAgent. Specifically, on software maintenance tasks in SWE-bench-verified, USEagent has an efficacy of 45.6%, which is similar to the 46.2% efficacy of the specialized AutoCodeRover agent, while being applicable to more types of tasks. On test generation tasks that AutoCodeRover could not be applied to, USEagent achieves 31.8% efficacy, demonstrating its versatility. We also make the USEbench benchmark public, to encourage further research in the design of AI software engineers.
# 2 Unified Software Engineering Benchmark
To make progress towards building a unified agent for software engineering (USEagent), we first build a dataset of automated software engineering tasks.
There is a rich environment of benchmarks, yet they are somewhat fragmented: they focus on specific types of software engineering tasks. Popular benchmarks encompass natural-language issues for software (SWE-bench [18]), automated program repair (Defects4J [19]), code generation (REPOCOD [20]), test generation (SWT [23]), and documentation generation (CodeNet [27]), each accompanied by a metric to identify correct or optimal solutions. Beyond their task, benchmarks often vary in scope too: the programming benchmarks often considered in foundational model evaluation sets (such as HumanEval [12], CodeNet [27], BigCode [4], EvalPlus [21]) cover method snippets or class-level code that is evaluated against known examples. Benchmarks like Defects4J [19] pose challenges for tools that require (correct) changes to a larger project, and are conceptually closer to the experiences of developers. The recently proposed SWE-bench [18] offers a new angle on repository-scale changes by introducing natural-language issues for program repair instead of a known test suite. However, due to the natural language, some issues were too ambiguous or unsolvable, spurring efforts by OpenAI to create a human-verified subset, called SWE-bench-verified [24]. Solving such natural-language issues automatically brings the vision of a future AI software engineer closer to the capabilities of a human software engineer. In this paper, we propose USEbench, a unified software engineering benchmark: a meta-benchmark which moves beyond the project-level code editing tasks suggested by SWE-bench. Our goal is to unify existing individual benchmark sets and design a comprehensive unified dataset capturing multiple SE tasks. Our proposal combines a set of existing benchmark sets, namely SWE-bench-verified [24], SWT [23], REPOCOD [20] and REPOTEST. Of these, SWE-bench-verified captures tasks which involve code editing or program modifications to achieve bug fixing / feature addition.
SWT-bench presents validation tasks for testing real-world code fixes in software projects. REPOCOD is a recent benchmark which captures code generation tasks in a software project. REPOTEST is a previously unpublished derivative of REPOCOD: while REPOCOD encompasses repository-level code generation after removing a method body, REPOTEST removes the tests for a specified method body, and evaluation is based on code coverage. The exact composition of USEbench from the constituent benchmarks is shown in Table 1. Six different task types are covered, and for each task type several instances are included in USEbench. As a notable technical contribution, we provide a unified interface for agentic interaction with the unified benchmark. Previous benchmarks like SWE-bench provide a suitable interface for the evaluation of code editing tasks, i.e., after providing a .json file the tasks are streamlined; still, there is significant work involved in interacting with a software project, or extracting data from the constituent files of a software project. Thus, LLM agents proposed to work on SWE-bench tasks need to make these adjustments to work with SWE-bench, e.g. see [26]. We want our unified benchmark USEbench to possess a greater degree of usability, so that LLM agents can be evaluated and compared in terms of their efficacy in conducting the software engineering tasks in the benchmark. To alleviate the burden on researchers, the constituent benchmarks are conveniently plugged behind a common interface revolving around Docker images, providing options to read files and execute commands, regardless of the task. Due to the unified nature of our benchmark USEbench, we can also imagine a unified software engineering agent (USEagent) working on the tasks in USEbench. Such a unified agent will be able to achieve complex software engineering tasks, which are solved via a sequence of the software engineering task types covered in the benchmark USEbench.
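A unified, container-backed task interface of the kind described above might look like the following sketch. The class and method names (Task, read_file, run_command) are illustrative placeholders, not USEbench's actual API, and the file/command operations are stubbed rather than touching a real container.

```python
# Sketch of a unified task interface: every task, regardless of type,
# exposes the same file-reading and command-execution operations against
# a containerized project. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    task_type: str    # e.g. "program_repair", "test_generation"
    description: str  # issue text or documentation of the method to generate
    workdir: str      # mount point of the project inside the container

    def read_file(self, path: str) -> str:
        """Read a project file (stubbed here; a real API would hit the container)."""
        return f"<contents of {self.workdir}/{path}>"

    def run_command(self, cmd: str) -> tuple[int, str]:
        """Execute a command in the container, returning (exit_code, output)."""
        return 0, f"ran: {cmd}"  # stubbed success

task = Task("program_repair", "Fix crash in parser on empty input", "/repo")
code, out = task.run_command("pytest -x")
print(code, out)  # 0 ran: pytest -x
```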
We now show two concrete scenarios of more complex software engineering tasks constructed from primitive task types in USEbench, namely incomplete fix and feature development, elaborated in Table 1 (Summary of primitive and compound tasks unified in USEbench). To concretely see how the different task types in USEbench are combined into compound tasks, let us consider the scenario of handling an incomplete code fix. In such a scenario, we (i) initially test the code and find failing tests, (ii) then generate a fix for handling the failed tests, (iii) test the fixed code and find more failed tests, and (iv) finally generate fixed code which passes all tests. These four steps could be four different runs of a unified software engineering agent (USEagent), and could combine different task types from USEbench, namely code fixing and test generation. The unified API of tasks in USEbench allows us to combine and alter tasks to capture more complex software engineering challenges, like incomplete fixes. Another scenario of a complex software engineering task, and a focus of both industry and research [36], is the complete addition of a feature to a code-base. This includes a problem description, and requires a functional method body as well as accompanying tests. How to solve the task, as well as any form of decomposition, re-iteration, or co-evolution, is left up to the users of the benchmark. The final evaluation of this derived scenario will be against a hidden test suite, as well as the generated tests. The GitHub repository of USEbench will be made publicly available soon.
# 3 Background: Agents for Software Engineering
Agents typically utilize LLMs for decision making and content generation, along with autonomously invoked tools to interact with external entities. These tools give the agent access to knowledge beyond the LLMs' training data, and allow the agent to influence the external environment through the tool interface.
Agents often employ reasoning frameworks to orchestrate the LLM and the tool usage. One of the popular reasoning frameworks used in agents is ReAct [40], in which the LLMs are instructed to reason about output traces and invoke tools in an interleaved manner. In software engineering, the target system that an agent interacts with is usually a software project. Another component of the external system is an execution environment, where software can be executed and tested. Existing software engineering agents (SE agents) proposed different ways to interact with the software project and the execution environment. AutoCodeRover [43] uses a set of project-structure-aware tools (e.g. search_method_in_class) to navigate the software codebase and gather necessary code context. It further performs test execution and spectrum-based fault localization (SBFL) [37] to pinpoint more relevant code locations. SWE-Agent [39] employs file-based tools to navigate files in the project, for example find_file, open, scroll_down, and together they form an agent-computer interface [39]. OpenHands CodeActAgent [35, 36] provides a set of basic tools such as CmdRunAction to execute arbitrary bash commands inside a sandbox environment. A similar approach was followed by the Google Jules agent [14]. Roughly summarized, the existing agents differ (intentionally) in how much structure and domain knowledge they introduce. Another aspect of agent design is the decision framework. Existing SE agents have explored options for more controlled decision-making beyond the ReAct framework, in the context of software engineering. AutoCodeRover [43] and Agentless [38] target software maintenance tasks, and they break the agent's execution into phases like context retrieval/localization and repair. Another example is RepairAgent [7], which restricts available tools through a finite-state machine.
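The ReAct-style interleaving of reasoning and tool invocation described above can be sketched with a scripted stand-in for the LLM. The tool set, the scripted replies, and the call-parsing convention are purely illustrative assumptions, not any particular agent's implementation.

```python
# Minimal ReAct-style loop: the (fake) model alternates thought/action
# steps, observations are fed back, and the loop ends on a final answer.

def fake_llm(history):
    """Stand-in for an LLM: scripted thought/action pairs."""
    if "Observation" not in history:
        return "Thought: locate the bug first.\nAction: search('parse')"
    return "Thought: fix found.\nFinal: patch parse() to handle None"

TOOLS = {"search": lambda q: f"found 2 matches for '{q}'"}

def react(task, max_steps=5):
    history = f"Task: {task}"
    for _ in range(max_steps):
        reply = fake_llm(history)
        if "Final:" in reply:                      # termination condition
            return reply.split("Final:")[1].strip()
        call = reply.split("Action:")[1].strip()   # e.g. search('parse')
        name, arg = call.split("(")[0], call.split("'")[1]
        obs = TOOLS[name](arg)                     # tool invocation
        history += f"\n{reply}\nObservation: {obs}"  # feed observation back
    return None

print(react("fix crash on empty input"))
# → patch parse() to handle None
```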
The different designs in existing SE agents raise an important question: how much autonomy should be given to the agent during decision making? On one hand, agents can follow a fine-tuned workflow for the targeted software engineering (SE) task (e.g. in AutoCodeRover, Agentless, RepairAgent). In this design, agents operate in fixed phases (e.g. localization is the first phase for program repair), which makes them more cost-effective [38, 43]. With the task broken down into phases, it also becomes easier to leverage program analysis techniques in each of the phases. However, agents with this design have less autonomy, and as a result, they cannot be directly applied to another kind of SE task. Applying a program repair agent to test generation would require drastic changes to its workflow. On the other hand, agents can be given more autonomy by having access to a general set of tools and the freedom to decide when to use these tools (e.g. in SWE-Agent and OpenHands CodeActAgent). This general design makes the agents applicable to many kinds of software engineering tasks. However, this generality makes it harder to exploit domain knowledge and program analysis in the agents. Moreover, when the task becomes complex and the agent execution gets longer, it is challenging to interpret an agent trajectory consisting of a list of general tool usages. In comparison, agents operating in "phases" are more interpretable, since we can inspect the results of each phase (e.g. the identified faulty location at the end of a localization phase). In this work, our goal is to have an agent applicable to multiple types of software engineering tasks, while still utilizing domain-specific optimizations and retaining interpretability. We name this agent the Unified Software Engineering Agent (USEagent).
# 4 Unified Software Engineering Agent
The first step towards a unified software engineering agent is breaking down the fixed workflow of existing agents into components (i.e., phases) and adding orchestration to decide which component to invoke based on the task type and task state. To achieve this, we propose a Meta-Agent, a central LLM-reasoning agent that orchestrates various components, henceforth called actions. The Meta-Agent and the actions together form a unified software engineering agent (USEagent) that is capable of solving several types of software engineering tasks with a higher level of autonomy. We identify several challenges in designing such a unified agent to solve tasks in USEbench. The first challenge is to adapt to tasks beyond a fixed workflow. Previous agents extract actions such as context retrieval or patch generation as clear "units of work" from developer workflows and organize them into state machines or fixed workflows. This rigid organization of actions restricts their adaptability to other tasks. To achieve high agent autonomy, we allow the actions to be freely composed and introduce a Meta-Agent that orchestrates the actions using a ReAct-style loop, forming a meta-layer over the actions, as shown in Figure 1.
Figure 1: Concept: Meta-Agent abstracting over actions
Starting from natural-language descriptions, the Meta-Agent invokes actions, observes the changes made by each action, reasons about the output, and decides on the next action to take. This framework enables on-the-fly composition of workflows, adjusting to different tasks, errors, and changing states. ReAct-style decision making has been widely used in previous agents to orchestrate tools that directly interact with the codebase; we extend it towards action orchestration. Following from this autonomy is the question of termination: a USEagent must be able to terminate ad libitum, when the task is considered completed.
We suggest a termination action that the Meta-Agent can choose to invoke. The overall trajectory and choices of the Meta-Agent construct an action-graph. In this action-graph, any node can be the start or end point; for example, depending on the task, an action of "executing tests" can be the starting point or the last assurance before termination. The agent execution can be seen as an implicit construction of this action-graph on the fly. The next challenge is to provide the Meta-Agent with a set of reliable, yet flexible actions that are suitable for composition. Existing general agents such as the OpenHands CodeActAgent [36] and Google Jules [14] provide the LLM with a console to execute any command. The action of executing a console command is extremely flexible, but hard to control. The agent's execution consists of a sequence of console commands, which are difficult for human developers to interpret and raise (security) concerns. Thus, we propose to design our actions at a coarser granularity, where each action encapsulates a "unit of work" that developers carry out during the software engineering process. Each action receives instructions from the Meta-Agent on what outcomes it is supposed to achieve, rather than how it achieves them. The Meta-Agent and individual actions communicate through this intent-based interface. An example action is EditCode, where the Meta-Agent specifies what should be edited in the code, but not the exact details of making the edits. This design results in a set of modularized actions with clear responsibilities that can be tested, extended, and maintained individually. Moreover, when an existing action is improved, the improvements propagate across all involved task types. We present a sample list of actions for unified software engineering agents in Section 4.1.
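The intent-based interface might be sketched as follows. The `Action` base class and the `EditCode` stub are our own illustrative assumptions; a real EditCode would delegate the actual editing to an LLM rather than record a placeholder diff.

```python
# Sketch of an intent-based action interface: the Meta-Agent states *what*
# outcome it wants, never *how* to achieve it. All names are illustrative.
from abc import ABC, abstractmethod

class Action(ABC):
    """A coarse-grained 'unit of work'. The Meta-Agent sees only the
    description and the input/output interface, never the internals."""
    description: str

    @abstractmethod
    def run(self, intent: str, state: dict) -> str:
        """Receive what to achieve, read/write the shared task state,
        and return an observation for the Meta-Agent."""

class EditCode(Action):
    description = "Edit program code to achieve the stated outcome."

    def run(self, intent: str, state: dict) -> str:
        diff = f"# diff implementing: {intent}"          # placeholder for an LLM-written diff
        state.setdefault("diff_store", []).append(diff)  # make it visible to other actions
        return f"Produced 1 diff for intent: {intent!r}"
```

Because each action hides its internals behind this interface, an improved EditCode benefits every task type that invokes it.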
The third challenge is knowledge management: for complex tasks, the information provided a priori is not sufficient, and a central effort in solving tasks is identifying and producing relevant information. Once produced, this context should be made available to the agentic system, i.e., the Meta-Agent and the other actions. Existing work has defined this problem as memory management of agents [17, 44], where memory refers to historical information relevant to the current task. In the literature on LLM agents, memory is often categorized into short-term (e.g., information within an ongoing conversation), long-term (e.g., historical data from previous sessions), and consensus memory (e.g., shared knowledge between multiple agents) [17]. In this work, we explicitly identify what information to memorize when designing a unified software engineering agent, and classify it into the different types of memory. We categorize the results of actions as short-term memory, which is only presented to the Meta-Agent to decide the next action. Project-specific knowledge, such as developer-written documentation, is considered long-term memory; these documents are embedded and stored in a vector database accessible to the agent for Retrieval-Augmented Generation (RAG). Finally, artifacts generated by actions that are useful in guiding other actions constitute consensus memory. Such artifacts include program locations or code edits that can be relevant to other actions. We introduce a structured task state that represents the consensus memory and can be read and modified by different actions.

Figure 2: Overview of AutoCodeRover. AutoCodeRover composes a fixed workflow for program maintenance tasks.

# 4.1 Instantiating AutoCodeRover as USEagent

In this section, we highlight the steps to build the unified SE agent USEagent, which handles multiple types of SE tasks, from an existing SE agent that focuses on only one task type.
We build our USEagent on top of the open-source agent AutoCodeRover [29, 43], since AutoCodeRover operates in distinct phases, which are suitable starting points for actions. We elaborate on the changes to AutoCodeRover and flesh out our proposed solutions to the challenges in Section 4. Fine-grained implementation details, prompts, etc. can be found in the supplementary material. AutoCodeRover is an LLM agent that targets program maintenance tasks. Figure 2 shows the overall workflow of AutoCodeRover. Starting from a natural-language description of a software issue, AutoCodeRover first writes an executable, self-contained test that reproduces the issue (Step $\textcircled{1}$). The stacktrace of the reproducer test, together with the task description, is sent to a context retrieval component to identify relevant code locations (Step $\textcircled{2}$). The relevant program locations are made available to the Edit Code component for writing a candidate patch (Step $\textcircled{4}$). With both the LLM-generated patch and the reproducer test, the Review Patch component then executes the tests on the patched program, decides whether they are correctly written, and iteratively improves them (Step $\textcircled{5}$). Finally, the AutoCodeRover workflow finishes when the reviewer approves a patch or the execution reaches a pre-defined round limit (Step $\textcircled{6}$). AutoCodeRover has proved effective for resolving software issues such as bug fixing [29, 43]. However, these fixed transitions among components make agents like AutoCodeRover not generalizable to task types that require a different workflow. For example, in a task like test-suite generation, steps such as Reproduction and Review Patch are irrelevant, requiring the design of a new workflow or even a new agent. Instead of designing a new agent for each task type, our goal is to build one general agent that handles multiple task types.
To achieve this, we disassemble the workflow of AutoCodeRover and reassemble it into USEagent, shown in Figure 3. We adapt components of AutoCodeRover such as Code Retrieval into actions that can be freely composed by a Meta-Agent for different task types. Given a task description that mentions the task type in natural language, the Meta-Agent uses ReAct-style reasoning to select the next actions to execute. Short-term feedback, such as action execution results, is reflected directly to the Meta-Agent, while artifacts that should be preserved longer are stored in a task state representing the consensus memory among actions. We next discuss each component of USEagent in more detail.

Figure 3: Overview of USEagent and its workflow. The Meta-Agent chooses among the available actions, provides the state, and receives an altered state until termination is chosen.

Actions. Given the tasks of program repair, regression testing, code generation, and test generation in USEbench, we identify an initial set of actions for USEagent, shown in Table 2. Each action exposes its interface, consisting of a description, input, and output, to the Meta-Agent, but does not expose execution details within the action. In addition to the input/output interaction with the Meta-Agent, each action can also read and write the task state. Many of the actions in Table 2 come from existing agents like AutoCodeRover (Figure 2), and incorporating them into USEagent only involves specifying their input/output interface. On top of the existing agents, we designed a few new actions that are essential for the task types in USEbench. These new actions include TestRetrieval and ExecuteTests. The TestRetrieval action explores the codebase and retrieves test cases relevant to the current task. It works similarly to the existing CodeRetrieval, which employs a set of search tools such as search_func to search for relevant code units in the program.
The ExecuteTests action executes parts of the project test suite, which is useful for validating code and test changes by execution. Current agents focusing on SWE-bench usually assume the commands to run existing unit tests are given as input to the agent [29, 38]; however, this assumption requires additional manual configuration when setting up the agents on new projects. Our ExecuteTests action does not assume the test commands are given; instead, it queries the project documentation files through RAG to retrieve the project-specific commands. In addition, we added a special Finish action that signals the termination of the agent execution. The Finish action is invoked with an argument specifying the final result for the current task. Unlike the other actions, which encapsulate concrete workflows, the Finish action only terminates the agent and outputs its argument as the final result. These actions can be invoked multiple times and combined in various ways to complete a given task. For example, for a code generation task, the Meta-Agent may first invoke CodeRetrieval to understand the surrounding code context, invoke EditCode to draft the implementation, invoke TestRetrieval to find out where existing tests for this code component reside, and then invoke EditCode to write new tests for the newly added code. It may then enter a refinement loop of improving the new code and tests using the ExecuteTests and EditCode actions. We note that this sequence of action invocations can happen without prior configuration, as we will show in Section 6.3.

Table 2: Details of the actions used in USEagent. Diff denotes a patch to the project, and the subscript specifies modifiers; e.g., DiffCode expresses a change to program code, as opposed to test code.

Task State. Our next contribution towards USEagent is the design of a task state.
The task state represents the consensus memory among the actions: each action can write newly generated artifacts into the task state so that they become visible to the other actions. Our task state captures essential artifacts involved in a software debugging process, including relevant locations, test execution information, and prior attempts at code modification. Specifically, the task state $S$ is defined as $S = (L_c, L_t, R_{exec}, DS)$. Here, $L_c$ represents the relevant code locations (e.g., program methods and classes), and $L_t$ represents the test locations (e.g., unit test methods). The locations are identified by the retrieval actions and are utilized by downstream code editing actions. $R_{exec}$ contains the execution results of different tests against different candidate patches. This execution information provides guidance to other actions in improving partial solutions and selecting the final solution. The last part of the task state is a diff store $(DS)$, which records all diff contents generated by the EditCode, Reproduction, and ReviewPatch actions. These diff contents include modifications to both the program code and the test code. The diff store is necessary to allow for versatile invocations of the ExecuteTests and EditCode actions. By selecting different subsets of diffs in $DS$ and passing them to ExecuteTests, tests can be executed on different versions of the project after applying the selected diffs. Similarly, EditCode can first apply an existing patch from $DS$ and subsequently introduce additional modifications, thereby enhancing partial patches.

Meta-Agent. The last important addition is the Meta-Agent, which orchestrates the various actions. As shown in Figure 3, the Meta-Agent takes in the task description and an initially empty task state (Step $\textcircled{1}$), and then iteratively selects an action from the action space to execute (Step $\textcircled{2}$).
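The task state $S = (L_c, L_t, R_{exec}, DS)$ might be sketched as a simple dataclass. Only the four components come from the definition above; the field and method names are our own illustrative choices.

```python
# Sketch of the task state S = (L_c, L_t, R_exec, DS) as consensus memory.
from dataclasses import dataclass, field

@dataclass
class TaskState:
    code_locations: list = field(default_factory=list)  # L_c: relevant methods/classes
    test_locations: list = field(default_factory=list)  # L_t: relevant unit test methods
    exec_results: dict = field(default_factory=dict)    # R_exec: (test, patch subset) -> outcome
    diff_store: list = field(default_factory=list)      # DS: every diff generated so far

    def select_diffs(self, indices):
        """Pick a subset of DS, e.g. to apply before an ExecuteTests run."""
        return [self.diff_store[i] for i in indices]

state = TaskState()
state.diff_store += ["fix-off-by-one.diff", "add-regression-test.diff"]
assert state.select_diffs([1]) == ["add-regression-test.diff"]
```

The `select_diffs` helper mirrors the versatile invocation pattern described above: ExecuteTests can run the tests against different project versions by applying different diff subsets.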
Each action represents an encapsulated workflow to complete a "unit of work", and exposes its input/output interface to the Meta-Agent. At each step of action selection, we present the Meta-Agent with the current task state, the overall task description, as well as the output from the previously invoked action. The invoked action modifies the task state and returns it to the Meta-Agent (Step $\textcircled{5}$). This interleaving of reasoning (i.e., selecting the next action based on the current output and state) and action (i.e., executing the selected action to obtain new observations) forms a ReAct-style loop at the action level. When the Meta-Agent deems that one of the diffs in the diff store is a satisfactory solution to the given task, it invokes the Finish action to end the agent execution and output the final solution. In the case where the Meta-Agent cannot decide on a final solution (i.e., does not invoke the Finish action) after a pre-defined limit on the ReAct loop, we end the execution and invoke the LLM to select one of the candidates in the diff store as the final solution.

# 4.2 Configuring OpenHands

The OpenHands CodeActAgent [36] is a general-purpose agent designed to solve general tasks. It utilizes a structured controller-agent-runtime architecture, where the AgentController serves as a supervisor that enforces operational constraints (conversation iterations, budget) and manages the agent's lifecycle (start/stop/pause), while the CodeActAgent itself makes decisions, interpreting LLM responses and converting them into actions that interact with the runtime, an isolated sandbox environment. Since CodeActAgent does not rely on a predefined task-specific workflow, it can be applied to a wide range of software engineering tasks, including those in USEbench, without requiring modifications to its architecture.
During execution, the OpenHands CodeActAgent operates iteratively, generating actions, executing them, and processing observations before determining the next step. The actions include:

• CmdRunAction: execute Linux bash commands.
• IPythonRunCellAction: execute Python code in a Jupyter or IPython environment.
• FileEditAction: read, write, or edit files in the runtime.
• AgentFinishAction: signal task completion or termination.

The execution of CodeActAgent terminates when the agent issues an AgentFinishAction, reaches resource limits (e.g., maximum iterations), or encounters an error. In practice, we design different prompt templates for each task type as input to the OpenHands CodeActAgent, incorporating both the task type and the task description. Compared to USEagent, the OpenHands CodeActAgent employs a similar reasoning framework, but its actions operate at a lower level, such as the execution of a single bash command. It follows a single ReAct loop centered around a more open set of available actions. This free-wheeling approach is an alternative to the more structured and pronounced options in Section 4.1.

# 5 Research Questions & Experiment Setup

Our first research question focuses on evaluating and comparing agentic systems on tasks beyond program repair.

# RQ1: Efficacy of Agentic Systems

How well do state-of-the-art agentic systems work when facing varying software engineering tasks from USEbench?

We answer RQ1 by adapting AutoCodeRover and OpenHands before applying them to all datapoints outlined in Section 2, and report PASS@1. In addition to the efficacy, we also report an investigation of the major obstacles in the individual benchmarks. To gather information on retries and estimate the effect of randomness, we compute PASS@5 on a significant subset of the datapoints. A known issue, and the motivation for our next research question, is overfitting in plausible patches.
Such patches pass the evaluation criteria by bypassing the harness with tricks rather than genuine functionality. Sometimes, actual solutions are checks for edge cases; these can only be distinguished from overfitting by consulting the ground truth (i.e., the gold patch). We aim to identify (and quantify) false positives and other cases of overfitting solutions.

# RQ2: Analysis of Plausible Patches

What is the rate of false positives among the resolved patches?

We sample a significant subset of solved datapoints and manually examine whether each plausible solution is overfitting. During this manual investigation, we also look for memorization in plausible patches, i.e., patches that perfectly mirror the ground-truth developer patch or contain suspicious content. As described in Section 4, one milestone is to make the agent adapt autonomously to different tasks. We study this adaptation by investigating the resulting sequences of actions in RQ3:

# RQ3: Effectiveness of self-configuration

Can USEagent self-configure to different task types?

We present the sequences of actions of USEagent and accumulate their patterns per task. We showcase examples of how these top-level patterns translate into commands, queries, and edits.

Table 3: Efficacy of USEagent and OpenHands CodeActAgent on USEbench.

Lastly, we identify open challenges in the current unified software engineering agent. We formulate key difficulties and discuss potential future directions to address them.

# RQ4: Open challenges for agentic systems

What are the challenges faced by the current unified agent on tasks in USEbench? Can we identify actions to remediate them?

Experiment setup. We evaluate agentic systems such as USEagent and the OpenHands CodeActAgent on the full USEbench, which consists of 1271 datapoints. In our experiments, all agentic systems use Anthropic Claude 3.5 Sonnet v2 (claude-3-5-sonnet-20241022) as the backend LLM.
For USEagent, we set the maximum number of rounds of action invocation by the Meta-Agent to 20, after which we force the agent to end execution and select a candidate solution. We set the model temperature to zero in USEagent. To better understand the effect of randomness, we randomly sampled a statistically significant subset (295 instances) to report PASS@5 results for both agentic systems.

# 6 Results

# 6.1 RQ1: Efficacy of Agentic Systems

An overview of the efficacy of the agents on USEbench is shown in Table 3. We investigate the results per system.

USEagent. In general, USEagent demonstrates its applicability across the different types of tasks in USEbench. USEagent achieves a PASS@1 of $45.6\%$ on SWE-verified. In comparison, AutoCodeRover, an agent specialized in issue resolution, achieved an efficacy of $46.2\%$ on the same SWE-verified benchmark when using the same Claude 3.5 Sonnet v2 backend LLM [1]. USEagent relaxes the fixed workflow of AutoCodeRover and is a more general agent that applies to more task types. Despite being more general, USEagent demonstrates similar efficacy on the specialized issue resolution task compared to AutoCodeRover. This comparable performance demonstrates that, with the design of USEagent, the increased autonomy does not result in a noticeable reduction in efficacy. While being effective on issue resolution tasks, USEagent can also handle tasks to which specialized agents cannot be applied. For issue reproduction tasks in SWT, USEagent resolved $40.3\%$ of the tasks. Here, "resolved" means USEagent generated test cases that cover all changed lines in the developer-written patch for the issue, without seeing the patch. Moreover, for a significant number of unresolved tasks, USEagent generated test cases that achieve a coverage above $90\%$. In these cases, the generated tests do not fully satisfy the requirement of the benchmark, but only miss coverage of individual statements or branches.
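The SWT resolution criterion described above reduces to a set comparison over patch lines. The sketch below is our own reading of the criterion (function names and the line-identifier format are assumptions), not the benchmark's actual harness code.

```python
# Sketch of a patch-line coverage criterion: tests resolve an SWT-style task
# only if they cover every line changed by the developer-written patch.
def resolved(changed_lines: set, covered_lines: set) -> bool:
    """All changed lines must be covered by the generated tests."""
    return changed_lines <= covered_lines

def coverage_ratio(changed_lines: set, covered_lines: set) -> float:
    """Partial coverage, e.g. the 'above 90%' cases discussed above."""
    return len(changed_lines & covered_lines) / len(changed_lines)

changed = {("src/app.py", 10), ("src/app.py", 11), ("src/util.py", 3)}
covered = {("src/app.py", 10), ("src/app.py", 11)}
assert not resolved(changed, covered)
assert abs(coverage_ratio(changed, covered) - 2 / 3) < 1e-9
```

Under this reading, a test suite covering all but one changed line scores a high ratio yet still counts as unresolved, which matches the near-miss cases reported above.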
On the test generation tasks in REPOTEST, USEagent achieves an efficacy of $31.8\%$, slightly lower than on the similar tasks in SWT. While the requirement in SWT is to cover a developer-written patch, the requirement in REPOTEST is to generate tests that cover an entire method, which can be more challenging. There are generated tests that cover most scenarios in a method but do not cover all the lines. For example, one pattern in the unresolved tasks is that the agent provides only one positive test input that executes the "normal" paths of a method, followed by tests that raise exceptions. In contrast, the ground-truth tests written by developers not only focus on exception paths, but also extensively test the "normal" inputs. We argue for certain advantages of agent-generated tests over the developer-written tests: the agent often generates shorter tests where each test has individual assertions, while the developer-written tests tend to be long tests that combine multiple inputs and assertions. In a real test suite, short individual tests can provide more informative error messages when part of the test suite fails [31, 33]. The tasks in REPOCOD proved to be the most challenging, with only a $6\%$ resolution rate. REPOCOD tasks are generally challenging because the requirement is to generate a complete method implementation that can pass a large number of hidden test cases. Overall, for many unresolved instances, the generated method passes a high number of tests, yet does not account for a few edge cases. The majority of resolved instances stem from Sympy, a library for symbolic mathematics. We suspect a relation between the relative success on Sympy and the fact that mathematical reasoning is becoming a standard element of the training data of foundation models. On the compound benchmark SWETRY, USEagent has a resolve rate of $8\%$, which shows some promise for applying USEagent as a follow-up to partial fixes.
We need to stress that the datapoints in SWETRY are sourced from previous failed attempts of agentic systems, which means they stem from the more challenging instances in SWE-verified. We observe two main failure modes on the SWETRY tasks. Firstly, since the partial patch is given to the agent as part of the task description, the agent often attempts to derive a solution on top of this partial patch by modifying it. However, the partial patch may be misplaced. In these cases, the agent still chose to iterate on this patch, failing to discard the partial attempt and approach the task in a different way. Secondly, the Meta-Agent sometimes ends the agent execution prematurely. For example, when the ExecuteTests action still shows some relevant test errors, the Meta-Agent wrongly disregards the errors as irrelevant to the task, ending the workflow. This can be addressed by configuring the prompt, at the cost of longer execution time and potential overfitting on other benchmarks.

OpenHands. The OpenHands CodeActAgent achieves a PASS@1 of $38.4\%$ on SWE-verified, which is lower than the result reported on the SWE-bench leaderboard [1]. We attribute this difference to the choice of agent: the reported result on the leaderboard is generated using a specialized CodeActSWEAgent, while we used the more general CodeActAgent in our evaluation. We utilize the general CodeActAgent to solve the different types of tasks in USEbench. On SWT, CodeActAgent achieves a resolution rate of $28.4\%$, slightly lagging behind USEagent. Similar to USEagent, CodeActAgent achieves only a low resolution rate on REPOCOD ($5.5\%$). However, unlike for USEagent, the resolved instances are evenly distributed across Sympy, Astropy (a popular library for astronomy and astrophysics), and Plotly (a graphing library), each contributing $27\%$ of the total resolved cases. On REPOTEST, the OpenHands CodeActAgent resolves $26.0\%$ of the total instances, similar to but slightly lower than its performance on SWT. This trend is consistent with that of USEagent. On SWETRY, CodeActAgent solves only $7\%$, similar to USEagent, indicating that improving partial fixes is still a difficult task for general agents.

PASS@5. Moving to PASS@5 increases the efficacy of USEagent to $49.5\%$ ($+48.6\%$) and that of OpenHands to $44.1\%$ ($+64.6\%$). This shows a similar increase across the two systems when employing global retries. Both systems benefit largely on edge cases, e.g., covering the last statement in testing or finding a correct reproducer.

# Summary: RQ1 - Efficacy of Agentic Systems

USEagent solves the tasks from USEbench at a $33.3\%$ rate, compared to $26.8\%$ for OpenHands. There is a consistent improvement over all benchmarks. USEagent's efficacy of $45.6\%$ on SWE-verified is close to the performance of fine-tuned systems.

# 6.2 RQ2: Dissection of Plausible Patches

For USEagent, we manually investigated a sample of 218 plausible solutions (i.e., solutions passing the resolution criteria of each benchmark) to understand the degree of memorization, overfitting, and other anomalies. We report a low number of 3 cases of memorization ($1.3\%$), next to 23 cases of overfitting ($10.5\%$). The percentage of overfitting solutions is lower than the previously reported overfitting rate of $31\%$ on SWE-bench for AutoCodeRover [29]. We attribute the lower overfitting rate in part to our extensive ExecuteTests action in USEagent, which allows for more versatile test execution. As a result, we see an overfitting rate of $13.6\%$ for USEagent on the SWE-verified dataset. Another reason for the lower overfitting rate lies in USEbench itself: the SWT and REPOTEST datasets show low numbers of overfitting, which further reduces the overall rate. Overfitting patches usually bypass tests using if-else constructs with early return values.
Sometimes, this matches the intended handling of edge cases; whether such patches are overfitting was determined by their distance to the ground truth. Furthermore, we identified a set of 33 anomalies that are plausible and do not overfit, yet are suspiciously different from the ground truth. One example anomaly is the introduction of a new vector library for xarray (a vector library itself). Another example is an unorthodox class-inheritance check for scikit-learn: instead of utilizing Python's standard isinstance function, a predicate is applied to the object's obj.__classes__. These solutions are functionally correct but are unlikely to be acceptable to developers, so we have marked them as anomalies.

# Summary: RQ2 - Analysis of Plausible Solutions

We manually inspected 218 plausible solutions from USEagent and report $10.5\%$ overfitting and $1.3\%$ memorization. Overfitting was most common within SWE-verified, and least common in REPOTEST and SWT.

# 6.3 RQ3: Self-Configuration

Figure 4 shows the different choices of the Meta-Agent when facing different benchmarks (we present two for brevity). The histograms display the actions taken first, second, etc., including the termination action Terminate. We observe some clear patterns across the benchmarks that match our intuition. For program repair tasks (i.e., in SWE-verified), we see in Figure 4a that the first step prominently consists of generating a reproduction test, followed by retrieving relevant code context. Over the course of agent execution, EditCode and ExecuteTests become more prominent, introducing and verifying code changes to the software. For regression testing (in SWT), Figure 4b shows that by and large the first step chosen is TestRetrieval, immediately followed by a (test-)code edit. The majority of USEagent trajectories on SWT tasks then consist of alternating test changes and test executions, as shown in the histograms.
Although we observe a pattern in the sequences of action invocations, some invocations seem noisy. For example, the Reproduction action generates a system-level reproducer test, yet it is still invoked in some of the SWT trajectories (the requirement of SWT is to generate unit-level tests). We manually investigated some cases and found that the reproducers were later transformed into unit-level tests. Nevertheless, the sequence of action invocations follows a certain pattern for each task type, as shown in Figure 4, showing that USEagent can self-configure its workflow based on the task type. Across all benchmarks, USEagent first gathers information (using the retrieval actions or reproducers), converging towards alternating EditCode and ExecuteTests until either the budget is exhausted or Terminate is chosen. This behavior showcases the capability of emulating concepts like test-time scaling [9] and command discovery [8] through the Meta-Agent.

# Summary: RQ3 - Self-Configuration

USEagent is capable of choosing different action patterns for different benchmarks: program repair (SWE) starts with Reproduction, regression testing (SWT) with TestRetrieval. The initial steps generate information before the actions converge towards alternating EditCode and ExecuteTests until termination.

# 6.4 RQ4: Open Challenges in Agentic Systems

Based on our previous investigations, we identify several major challenges in the current unified software engineering agent. Firstly, when the task requires writing a large amount of code (e.g., feature development tasks), the agent often generates solutions that are "mostly correct". This is reflected in our evaluation on the REPOCOD dataset, where the task is to implement a complex function based on a natural-language requirement, and correctness is defined as passing a large number of hidden unit tests.
We observe that a number of solutions generated by the agent implement the required feature almost correctly, but fail a few hidden tests due to missing edge cases. The edge-case behaviors are sometimes not specified clearly in the natural-language task description. We attribute this challenge in part to the inherent ambiguity of natural language. A future agent may first resolve the ambiguity in natural language by transforming the task requirements into more "formal" artifacts such as specifications and test cases, and then generate solutions based on the formal specification. Another approach is to design human-agent interaction schemes to clarify ambiguity when needed. Secondly, the current agent lacks the capability of "backtracking" when the agent execution does not yield meaningful results on a given execution path. For example, when there is a partial patch at a particular program location, the agent is prone to continuously making small edits to the partial patch and executing tests to verify them. However, the agent may not make good progress with this approach, since it may be impossible to craft a good solution at the location of the partial patch. This issue is amplified in the SWETRY dataset, in which the task description comes with a partial patch. To tackle this issue, a future agent should employ a backtracking mechanism to discard unpromising partial solutions and start over at a previous step. It could be possible to employ recent reasoning LLMs to examine the agent execution trace and decide whether to backtrack, since reasoning models have shown backtracking behaviors in their thinking process [16]. Moreover, recent work [5] on employing search algorithms over agent execution trajectories can also help the agent escape from unpromising paths. Lastly, we still observe patch overfitting in tasks such as program repair and regression testing. Overfitting is a known problem in automated program repair [15].
Overfitting happens when the generated patches pass the given tests but miss the actual requirements. Thus, to avoid overfitting, if the tests are too specific, we need to generalize them. Agentic systems show promise for test generation (also reflected in this work), and the framework introduced by USEagent allows for additional agents that cover test amplification or mutation testing, adversarial reasoning, or test carving. The produced artifacts will feed back into test execution, code generation, review, etc., reducing overfitting as well as increasing efficacy and trustworthiness.

# Summary: RQ4 - Open Challenges

The challenges consist of exploring edge-case handling and alternative solutions in feature development, backtracking and discarding poor partial results in repair, and repair overfitting. We suggest search approaches for the first, and agents for test amplification for the last.

Figure 4: Comparison of SWT and SWE step distributions for USEagent on a significant subset of USEbench. Each bar shows how often different actions are invoked at a particular Meta-Agent step. (a) Distribution of SWE steps: the most common path starts with reproduction, followed by code retrieval, editing, and review. (b) Distribution of SWT steps: the most common trajectory begins with test-case retrieval, test editing, and execution.

# 7 Related Work

SE Benchmarks. Recent software engineering benchmarks fall into two broad categories: isolated coding exercises and repository-level tasks. Some coding exercises like CodeXGlue [22] come without an evaluation metric and form a training corpus; others like HumanEval [11], MBPP [6], ClassEval [13], or CoderEval [41] provide a challenge accompanied by an evaluation harness (i.e., a test suite). There are already ongoing efforts to provide a meta-benchmark for these tasks [42, 21], which commonly target LLMs directly rather than agentic systems. For USEbench, we select repository-level tasks to provide challenges of a uniform granularity.
We decided against coding exercises as many models slowly saturate on them [30]. Large Language Models and Agentic Systems. We consider a system agentic if the LLMs have a certain autonomy and are able to interact with the system through commands or other pre-defined tools. As such, Agentless [38] utilizes LLMs but is not an agentic system due to its pre-defined workflow. Within agentic systems we see a divide into raw LLM agents that interface LLMs directly with a console and minimal tooling (e.g. ReAct [40], Gemini [14], or SWE-agent [39]) and approaches that introduce more domain knowledge as well as additional techniques. These additions range from specialised tools (like reproduction and fault localisation in AutoCodeRover [43]) to self-reflective capabilities (like the reviewer agent in SpecRover [29]). For this work, we have focused on AutoCodeRover [43] and OpenHands [36], as these are open-source projects with strong resonance in the community. While the AIDER project [3] has a well-suited scope (multiple SE tasks), its workflow relies on human prompting. We position AutoDev [32] similarly to AIDER, as its primary interface is a CLI for human interaction. Agentic Systems. Passerine [28] by Google proposes a program repair framework oriented at SWE-agent [39], incorporating a structure similar to our meta-agent and providing a total of 5 pre-configured commands (excluding a free command line). Our efforts differ in scope, as we target more than program repair and employ a larger set of agents across multiple tasks. Similarly, Microsoft’s MASAI [34] defines an action space for its central agent similar to the meta-agent, limited to program repair. We differ from both works by providing more agents, structuring the system’s information and responses through a task-state, and applying the agentic system to more tasks than program repair. CodeR [10] has a meta-agent component that pre-defines an execution plan, but does not adjust it dynamically.
# 8 Threats to validity

Construct Validity. Within USEbench, we approximate testing capabilities via test coverage. Test coverage does not imply a semantic coverage of the code, and can be achieved by various hacks. Lacking a better oracle, we have opted for test coverage as it still captures many of the attributes we want to evaluate: the patches must be in a correct format and location, and must fit into the project’s test suite. Furthermore, upon inspection many of the REPOTEST datapoints contain branches that handle edge-case behavior with errors, for which syntactic coverage equals semantic testing. Internal Validity. Data Leakage. It is possible that the LLMs have been exposed to code subject to evaluation, or even to datapoints from the source benchmarks [30]. This is particularly likely as many publications [29, 38, 39] utilize the APIs of common providers that use user data for training. Combating this is beyond our capabilities, but we aim to provide an overview of memorization by manually inspecting a sample of positives.

# 9 Discussion

Cost Analysis. While we expect that costs will play a diminishing role for agentic systems as foundation models and infrastructure advance, we still want to summarize the accumulated costs over the experiments. USEagent costs averaged 3.65 USD, with notable differences between sub-benchmarks. Herein, SWE scored lowest with a 2.20 USD average, and REPOCOD highest at a mean of 7.66 USD. The high costs for REPOCOD originate from the iteration of patch generation and test execution. Within actions, EditCode was the dominant cost driver, due to the (large) contexts provided through relevant code and tests. We calculate the Pearson correlation coefficient between the elapsed time and the induced costs, and find a strong correlation of $r = 0.89$. This shows a strong link between computational (e.g. test execution) and model (e.g. reasoning) efforts. Perspectives.
We consider the approach of USEagent, in combination with the meta-benchmark USEbench, to be a promising avenue for any task that revolves around code as its primary artifact. Many tasks such as dependency management or project configuration can be incrementally tackled by extending USEbench and introducing more actions into USEagent. For USEagent to become truly an AI Software Engineer, it must address tasks such as requirements engineering, data visualization, deployment, code review, or even A/B testing. The efforts presented in this work are the second step in agentic development towards an AI software engineer, the first being individual agentic systems for single tasks (like AutoCodeRover). With USEagent we pave the way for a unified approach to interact autonomously with code regardless of the task, allowing us to extend our scope beyond its current limits. Going beyond USEagent will involve studying AI-AI and AI-human collaboration. Thus we will need to study the cooperative intelligence resulting from multiple USEagents and multiple human developers. This might establish the dynamics of future development teams.

# References

[1] [n. d.]. SWE-bench leaderboard. https://www.swebench.com/#verified. Accessed: 2025-03-08. [2] Abhik Roychoudhury, Corina Pasareanu, Michael Pradel, and Baishakhi Ray. 2025. AI Software Engineer: Programming with Trust. arXiv (Feb 2025). [3] AIDER Team. 2024. AIDER: Advanced AI-Powered Coding Assistant. https://aider.chat. Accessed: 2025-01-03. [4] Anonymous. 2021. The Big Code Project: Learning to Program by Learning to Understand Programming Languages. arXiv preprint arXiv:2102.01092 (2021). [5] Antonis Antoniades, Albert Örwall, Kexun Zhang, Yuxi Xie, Anirudh Goyal, and William Wang. 2024. SWE-Search: Enhancing Software Agents with Monte Carlo Tree Search and Iterative Refinement. arXiv preprint arXiv:2410.20285 (2024).
[6] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732 (2021). [7] Islem Bouzenia, Premkumar Devanbu, and Michael Pradel. 2024. Repairagent: An autonomous, llm-based agent for program repair. arXiv preprint arXiv:2403.17134 (2024). [8] Islem Bouzenia and Michael Pradel. 2024. You Name It, I Run It: An LLM Agent to Execute Tests of Arbitrary Projects. arXiv preprint arXiv:2412.10133 (2024). [9] Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. 2024. Large language monkeys: Scaling inference compute with repeated sampling. arXiv preprint arXiv:2407.21787 (2024). [10] Dong Chen, Shaoxin Lin, Muhan Zeng, Daoguang Zan, Jian-Gang Wang, Anton Cheshkov, Jun Sun, Hao Yu, Guoliang Dong, Artem Aliev, et al. 2024. CodeR: Issue Resolving with Multi-Agent and Task Graphs. arXiv preprint arXiv:2406.01304 (2024). [11] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. 
Evaluating Large Language Models Trained on Code. arXiv:2107.03374 [cs.LG] [13] Xueying Du, Mingwei Liu, Kaixin Wang, Hanlin Wang, Junwei Liu, Yixuan Chen, Jiayi Feng, Chaofeng Sha, Xin Peng, and Yiling Lou. 2023. ClassEval: A manually-crafted benchmark for evaluating LLMs on class-level code generation. arXiv preprint arXiv:2308.01861 (2023). [14] Google. 2024. Google Gemini AI Update: December 2024. Blog post. https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/#building-responsibly. [15] Claire Le Goues, Michael Pradel, and Abhik Roychoudhury. 2019. Automated Program Repair. Commun. ACM 62 (2019). Issue 12. [16] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. 2025. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948 (2025). [17] Shanshan Han, Qifan Zhang, Yuhang Yao, Weizhao Jin, Zhaozhuo Xu, and Chaoyang He. 2024. LLM multi-agent systems: Challenges and open problems. arXiv preprint arXiv:2402.03578 (2024). [18] Carlos E Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik R Narasimhan. 2024. SWE-bench: Can Language Models Resolve Real-world Github Issues?. In The Twelfth International Conference on Learning Representations. https://openreview.net/forum?id=VTF8yNQM66 [19] René Just, Darioush Jalali, and Michael D Ernst. 2014. Defects4J: A database of existing faults to enable controlled testing studies for Java programs.
In Proceedings of the 2014 international symposium on software testing and analysis. 437–440. [20] Shanchao Liang, Yiran Hu, Nan Jiang, and Lin Tan. 2024. Can Language Models Replace Programmers? REPOCOD Says ‘Not Yet’. arXiv preprint arXiv:2410.21647 (2024). [21] Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. 2024. Is your code generated by ChatGPT really correct? Rigorous evaluation of large language models for code generation. Advances in Neural Information Processing Systems 36 (2024). [22] Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, et al. 2021. CodeXGLUE: A machine learning benchmark dataset for code understanding and generation. arXiv preprint arXiv:2102.04664 (2021). [23] Niels Mündler, Mark Niklas Mueller, Jingxuan He, and Martin Vechev. 2024. SWT-Bench: Testing and Validating Real-World Bug-Fixes with Code Agents. In The Thirty-eighth Annual Conference on Neural Information Processing Systems. https://openreview.net/forum?id=9Y8zUO11EQ [24] OpenAI. 2024. Introducing SWE-bench Verified. https://openai.com/index/introducing-swe-bench-verified/. Accessed: 2025-01-03. [25] OpenAI. 2024. OpenAI o1: Large Language Model. https://openai.com/index/introducing-openai-o1-preview/ Accessed: 2025-02-26. [26] Albert Örwall and contributors. 2025. Moatless Tools. https://github.com/aorwall/moatless-tools/tree/main. Accessed: 2025-02-04. [27] Ruchir Puri, Geert Jansen Saini, Shruti Kalyan, Naveen Limaye, Saurabh Kolar, Philipe Maldonado, Jianfeng Hu, Dinesh Sreedhar, Yunhui Zhang, Lingming Zhang, et al. 2021. CodeNet: A Large-Scale AI for Code Dataset for Learning a Diversity of Coding Tasks. In 2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). IEEE, 15–18. [28] Pat Rondon, Renyao Wei, José Cambronero, Jürgen Cito, Aaron Sun, Siddhant Sanyam, Michele Tufano, and Satish Chandra. 2025.
Evaluating Agent-based Program Repair at Google. arXiv preprint arXiv:2501.07531 (2025). [29] Haifeng Ruan, Yuntong Zhang, and Abhik Roychoudhury. 2024. Specrover: Code intent extraction via llms. arXiv preprint arXiv:2408.02232 (2024). [30] June Sallou, Thomas Durieux, and Annibale Panichella. 2024. Breaking the silence: the threats of using llms in software engineering. In Proceedings of the 2024 ACM/IEEE 44th International Conference on Software Engineering: New Ideas and Emerging Results. 102–106. [31] Davide Spadini, Fabio Palomba, Andy Zaidman, Magiel Bruntink, and Alberto Bacchelli. 2018. On the relation of test smells to software code quality. In 2018 IEEE international conference on software maintenance and evolution (ICSME). IEEE, 1–12. [32] Michele Tufano, Anisha Agarwal, Jinu Jang, Roshanak Zilouchian Moghaddam, and Neel Sundaresan. 2024. AutoDev: Automated AI-Driven Development. arXiv preprint arXiv:2403.08299 (2024). [33] Arie Van Deursen, Leon Moonen, Alex Van Den Bergh, and Gerard Kok. 2001. Refactoring test code. In Proceedings of the 2nd international conference on extreme programming and flexible processes in software engineering (XP2001). Citeseer, 92–95. [34] Nalin Wadhwa, Atharv Sonwane, Daman Arora, Abhav Mehrotra, Saiteja Utpala, Ramakrishna B Bairi, Aditya Kanade, and Nagarajan Natarajan. 2024. MASAI: Modular Architecture for Software-engineering AI Agents. In NeurIPS 2024 Workshop on Open-World Agents. https://openreview.net/forum?id=NSINt8lLYB [35] Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, and Heng Ji. 2024. Executable Code Actions Elicit Better LLM Agents. In ICML. arXiv:2402.01030 [36] Xingyao Wang, Boxuan Li, Yufan Song, Frank F Xu, Xiangru Tang, Mingchen Zhuge, Jiayi Pan, Yueqi Song, Bowen Li, Jaskirat Singh, et al. 2024. Openhands: An open platform for ai software developers as generalist agents. arXiv preprint arXiv:2407.16741 (2024). [37] W Eric Wong, Ruizhi Gao, Yihao Li, Rui Abreu, and Franz Wotawa. 
2016. A survey on software fault localization. IEEE Transactions on Software Engineering 42, 8 (2016), 707–740. [38] Chunqiu Steven Xia, Yinlin Deng, Soren Dunn, and Lingming Zhang. 2024. Agentless: Demystifying llm-based software engineering agents. arXiv preprint arXiv:2407.01489 (2024). [39] John Yang, Carlos E Jimenez, Alexander Wettig, Kilian Lieret, Shunyu Yao, Karthik Narasimhan, and Ofir Press. 2024. Swe-agent: Agent-computer interfaces enable automated software engineering. arXiv preprint arXiv:2405.15793 (2024). [40] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2023. ReAct: Synergizing Reasoning and Acting in Language Models. In International Conference on Learning Representations (ICLR). [41] Hao Yu, Bo Shen, Dezhi Ran, Jiaxin Zhang, Qi Zhang, Yuchi Ma, Guangtai Liang, Ying Li, Qianxiang Wang, and Tao Xie. 2024. Codereval: A benchmark of pragmatic code generation with generative pre-trained models. In Proceedings of the 46th IEEE/ACM International Conference on Software Engineering. 1–12. [42] Zhaojian Yu, Yilun Zhao, Arman Cohan, and Xiao-Ping Zhang. 2024. HumanEval Pro and MBPP Pro: Evaluating Large Language Models on Self-invoking Code Generation. arXiv preprint arXiv:2412.21199 (2024). [43] Yuntong Zhang, Haifeng Ruan, Zhiyu Fan, and Abhik Roychoudhury. 2024. Autocoderover: Autonomous program improvement. In Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis. 1592–1604. [44] Zeyu Zhang, Xiaohe Bo, Chen Ma, Rui Li, Xu Chen, Quanyu Dai, Jieming Zhu, Zhenhua Dong, and Ji-Rong Wen. 2024. A survey on the memory mechanism of large language model based agents. arXiv preprint arXiv:2404.13501 (2024).
The growth of Large Language Model (LLM) technology has raised expectations for automated coding. However, software engineering is more than coding; it is also concerned with activities including the maintenance and evolution of a project. In this context, the concept of LLM agents, which utilize LLMs as reasoning engines to autonomously invoke external tools, has gained traction. But is an LLM agent the same as an AI software engineer? In this paper, we seek to understand this question by developing a Unified Software Engineering agent, or USEagent. Unlike existing work which builds specialized agents for specific software tasks such as testing, debugging, and repair, our goal is to build a unified agent which can orchestrate and handle multiple capabilities. This gives the agent the promise of handling complex scenarios in software development such as fixing an incomplete patch, adding new features, or taking over code written by others. We envision USEagent as the first draft of a future AI Software Engineer which can be a team member in future software development teams involving both AI and humans. To evaluate the efficacy of USEagent, we build a Unified Software Engineering bench (USEbench) comprising myriad tasks such as coding, testing, and patching. USEbench is a judicious mixture of tasks from existing benchmarks such as SWE-bench, SWT-bench, and REPOCOD. In an evaluation on USEbench consisting of 1,271 repository-level software engineering tasks, USEagent shows improved efficacy compared to existing general agents such as OpenHands CodeActAgent. There remain gaps in the capabilities of USEagent for certain coding tasks, which provides hints for further developing the AI Software Engineer of the future.
# 1 Introduction

With the technological advancements of the last couple of decades, machine learning (ML) and artificial intelligence (AI) play an important part in automated decision-making pipelines [1–3]. Even though these tools are generally created by optimising with respect to their accuracy and performance, there are other important aspects that should be considered, such as their fairness, robustness, and privacy [4]. One of these aspects, fairness, becomes even more crucial when AI-based tools are used for decision-making tasks such as checking whether accepting a credit application is profitable and risk-free, whether an applicant is suitable for a job position, or whether a defendant has a higher risk of committing a crime again. Such processes, by their nature, are prone to bias, meaning that sensitive attributes in the data such as the gender, race, age, or employment status of a person may implicitly or explicitly affect the judgement of the model [5]. Since such decisions have a major impact on individuals’ lives, questions about the fairness and integrity of these decisions give rise to the research area of Fair AI, or AI fairness [3]. Fairness is a complex concept with different definitions across disciplines. Within the context of ML, the concept of fairness becomes applicable to decision-making algorithms. For decision-making ML applications, fairness is the moral lens to impose upon how the data is processed, learned from, and applied to the task at hand [6]. In the context of AI fairness, most work focuses on binary classification tasks [7]. One common method to examine the fairness of a binary classification task is to identify privileged and unprivileged groups, as well as favourable and unfavourable labels. As an example, in a fraud detection problem, fraud is the unfavourable label, whereas non-fraud is the favourable label.
In this example, we can consider age as a sensitive attribute where the young constitute the privileged group and the old constitute the unprivileged group. In practice – to describe the relevant cases of unfairness of classifiers – a group is considered unprivileged if its members are more likely to be predicted as unfavourable. The AI fairness literature includes detection and mitigation methods for unfairness and bias, covering both data-driven and model-driven approaches [2, 8–11]. For bias detection, the literature has proposed metrics that aim to measure the fairness of a model or a dataset [1, 12, 13]. For mitigation, there are proposed algorithms to be utilised during pre-processing, in-processing, or post-processing [14–16]. Also, there are ready-to-use toolkits [17, 18] that aim to make these methods accessible to researchers and data scientists. An important challenge for ML and AI applications is dealing with data that is imbalanced in nature [19]. There exist studies focusing on creating a more balanced dataset by different sampling techniques [19–21], with synthetic data generation [22], or by directly balancing the internal data representation [23]. The literature is also rich with comparative experiments and imbalanced datasets from different domains [24, 25]. Since fairness and imbalanced data are important challenges in ML on their own, considering them together creates more complex challenges. By its nature, in the context of fairness, a dataset may be imbalanced both according to the label distribution and the privilege group distribution, which we denote as double imbalance. In the literature, there exist studies on the intersection of fairness and imbalanced data [26–28]. However, few methods address this double imbalance problem effectively. Additionally, most proposed approaches are either task-specific or model-dependent.
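The double-imbalance notion can be made concrete by measuring the label distribution, the privilege-group distribution, and their joint distribution over the four group/label partitions at once. The following sketch uses plain Python with illustrative label and group values; it is an aid to the definition, not code from any of the cited toolkits.

```python
from collections import Counter

def imbalance_ratios(labels, groups):
    """Return the label distribution, the group distribution, and the joint
    (double-imbalance) distribution over the four group/label partitions."""
    n = len(labels)
    label_ratio = {k: v / n for k, v in Counter(labels).items()}
    group_ratio = {k: v / n for k, v in Counter(groups).items()}
    joint = {k: v / n for k, v in Counter(zip(groups, labels)).items()}
    return label_ratio, group_ratio, joint

# Toy data: 'fraud' is the rare unfavourable label, 'old' the unprivileged group.
labels = ["fraud"] * 2 + ["non-fraud"] * 98
groups = ["old"] * 10 + ["young"] * 90
label_r, group_r, joint = imbalance_ratios(labels, groups)
# label_r["fraud"] -> 0.02 (label imbalance); group_r["old"] -> 0.1 (group imbalance)
```

A dataset is doubly imbalanced when both marginal distributions are skewed, which makes the joint partition counts (here, the old-and-fraud cell) especially small.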
In this study, we focus on debiasing doubly imbalanced datasets and propose a fairly general solution that aims to optimize both fairness and classification performance. Firstly, we conduct an exploratory study to analyze the performance of a fairness method on a doubly imbalanced dataset. We investigate the effect of balancing the dataset in terms of only the privilege group, only the favourable label (singly balanced), or both (doubly balanced). Motivated by the observation that existing mitigation solutions have limitations for doubly imbalanced data, we propose a solution that finds the optimal balancing for sampling the data in terms of both the privileged/unprivileged group and the favourable/unfavourable labels. The proposed solution considers a doubly imbalanced dataset composed of four partitions (privileged and favourable, privileged and unfavourable, unprivileged and favourable, unprivileged and unfavourable). It aims to find the optimal ratios for these groups that would improve the fairness of classifiers with the least possible compromise of detection performance – a multi-criteria optimization problem. One decision task where both the fairness and the precision of the model are crucial is fraud detection, a binary classification problem that aims to identify fraud instances in a dataset. Both label and privilege-group imbalance can be observed for this task. Publicly available datasets naturally show fraud label occurrences as low as $0.172\%$ [29], $1.10\%$ [30], and $5.96\%$ [25]. Considering fairness for these datasets poses the challenge of being fair and accurate at the same time. With these aspects in mind, in this study fraud detection is used as an application case and a doubly imbalanced fraud dataset is used in the analysis.
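The core sampling step of such a solution can be sketched as follows: given target fractions for the four group/label partitions, subsample each partition to its target size. This is a simplified illustration under an assumed data layout (a list of dicts with hypothetical `group` and `label` keys), not the authors' implementation.

```python
import random

def resample_partitions(rows, target, total, seed=0):
    """Subsample `rows` so that each of the four group/label partitions
    contributes (approximately) its target fraction of `total` instances."""
    rng = random.Random(seed)
    parts = {}
    for row in rows:
        parts.setdefault((row["group"], row["label"]), []).append(row)
    sample = []
    for key, frac in target.items():
        pool = parts.get(key, [])
        k = min(round(frac * total), len(pool))  # cap at the available instances
        sample.extend(rng.sample(pool, k))
    return sample

rows = ([{"group": "old", "label": "fraud"}] * 20
        + [{"group": "old", "label": "non-fraud"}] * 80
        + [{"group": "young", "label": "fraud"}] * 40
        + [{"group": "young", "label": "non-fraud"}] * 360)
# A doubly balanced target: 25% of the sample from each partition.
target = {(g, l): 0.25 for g in ("old", "young") for l in ("fraud", "non-fraud")}
balanced = resample_partitions(rows, target, total=80)  # 20 rows per partition
```

A grid search over such target fractions, scoring each candidate on a fairness metric and a classification metric, is one way to obtain the Pareto front of balance points described above.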
The main contributions of the study are as follows:

• We propose the concept of a doubly imbalanced dataset, and with an exploratory analysis we show the limitations of existing debiasing approaches for doubly imbalanced datasets.
• We propose a multi-criteria optimisation-based debiasing solution for doubly imbalanced datasets, which considers both fairness and classification accuracy at the same time. The proposed solution includes the phases of sampling according to given data-balance parameters and applying grid search to find the optimal ratios for the imbalanced label and privilege feature, in order to balance fairness and accuracy. The obtained balance points are presented as a Pareto front, so that the trade-off between fairness and classification accuracy can be explored and the best-fitting point can be picked. The proposed solution is classification-model agnostic and can be applied to singly balanced dataset cases as well.
• The performance of the proposed solution is analysed on three benchmark datasets with five classification models. The experiments show the usability of the proposed debiasing method for datasets with different imbalance ratios.

This paper is structured as follows: We present a summary of related studies from the AI fairness literature in Section 2. Section 3 lays out the terminology and primitives that are used in this work. We present the exploratory analysis on debiasing performance for imbalanced cases in Section 4. Section 5 describes the proposed solution for determining the optimal balance structure in the data sample for debiasing and classification performance. Our experiment results and discussion are given in Section 6, and the paper is concluded with an overview in Section 7.

# 2 Related Work

The literature on social studies related to AI fairness includes studies on unfair and biased AI models and their effects.
In [5], Lambrecht and Tucker analyse the data of a job recommender system which was biased to present a lower number of STEM ads to women than to men. In addition to case studies, the literature is also relatively rich in metrics and mitigation algorithms. In the AI literature, one of the earliest works is the one by Calders et al., who propose a method to build classifier models with independency constraints to reduce sampling bias [14]. In another work, Kamiran and Calders define a “massaging” process for the data to reduce bias in a dataset [10]. Feldman et al. discuss the Disparate Impact (DI) ratio metric (explained in detail in Section 3) and propose a DI removal method that modifies the dataset to improve the DI ratio [1]. Iurada et al. [13] focus on the cross-domain (CD) learning problem, proposing a new fairness metric for the task of image classification. The authors present fairness benchmarks for 14 different CD learning models with five different datasets. More recently, studies propose fair training algorithms and representation methods, focusing on privilege definitions. Kearns et al. define a fair learning algorithm that utilises group fairness [31, 32]. Iosifidis and Ntoutsi propose AdaFair, a fairness-aware variant of the AdaBoost classifier [15]. Celis et al. propose a meta-algorithm for fairness that utilises group-based fairness metrics [33]. There are also toolkits and datasets that can be used for improving the fairness of ML and AI pipelines. AIF360 [18] and Aequitas [17] are two of the most commonly used open-source toolkits. On the dataset side, Jesus et al. constructed and published the open-source BAF dataset suite [30], which consists of six different variants, each having a different bias induced in a controlled manner, to be used in fairness research. The dataset is generated from an already existing biased dataset; however, the original dataset is not disclosed publicly due to privacy concerns [34].
On the other hand, exploring the challenge of working with imbalanced data for ML and AI tasks, the literature is rich in studies from different domains. Several studies propose resampling techniques such as oversampling or undersampling to make the imbalanced data balanced, some of them utilising synthetic data generation approaches. Chawla et al. proposed SMOTE [19], covering the oversampling approach with synthetic data generation as early as 2002 and providing a basis for other approaches in the following years [20, 21, 35]. Other studies also use synthetic data generation. Yilmaz et al. [22] study an ‘intrusion detection’ problem, which is similar to fraud detection in terms of an imbalanced label distribution. They use a generative adversarial network (GAN) structure to generate a larger and more balanced version of an existing dataset. Marrakchi et al. [23] propose a feature embedding technique which employs contrastive learning to balance class labels, such that the representation of the data is modified instead of changing the dataset. Approaching the problem with a meta-learning solution, Moniz et al. propose ATOMIC [36], which anticipates the performance of a set of solutions first, to reduce complexity and costs. There are also studies from different domains aiming to explore the performance of existing methods. Khushi et al. [24] compare several resampling approaches on an imbalanced dataset of medical origin. In the fraud detection domain, Makki et al. [25] experiment with a wide set of balancing methods, comparing their performance on a highly imbalanced credit card fraud dataset. Although the literature is fairly rich in terms of fairness studies and method proposals for data imbalance, there are only a few studies at the intersection of these two problems. Lavalle et al. study the effect of rebalancing in terms of creating bias by proposing a novel and automated data visualisation method [27].
Approaching the topic from a different perspective, Nagpal et al. focus on the problem of gender recognition using images. They propose a loss function that aims to minimise the effect of bias in an imbalanced dataset with respect to ethnicity [26]. Focusing on the domain of education, Sha et al. explore the trade-off between accuracy and demographic bias of a model, obtaining a small sacrifice in accuracy in a less biased model trained on an artificially balanced dataset instead of the original imbalanced dataset [28].

# 3 Preliminaries

This section presents basic concepts and metrics related to fairness in ML and AI. Favourable and Unfavourable Labels. The favourable label is the label assigned within the classification task that is positive in a social context. For instance, being hired is the favourable label for an ML tool deciding the outcome of job applications, whereas being rejected is the unfavourable label. Privileged and Unprivileged Groups. The privileged group is defined as the set of data instances, within a sensitive attribute, that the binary classifier favours; e.g., if an ML model for hiring favours male applicants over female applicants, the male and female applicants are called the privileged and unprivileged groups, respectively. Disparate Impact Ratio. The disparate impact (DI) ratio is a fairness metric derived from the concept of Disparate Impact (DI), which is defined as systematic favouritism towards a certain group$^1$. The DI ratio is defined as the ratio of the probabilities of being labeled as unfavourable for the unprivileged and privileged classes, and can be calculated as in Equation 1.

$$ \frac{P(L = \mathrm{unfavourable} \mid G = \mathrm{unprivileged})}{P(L = \mathrm{unfavourable} \mid G = \mathrm{privileged})} $$

In the equation, $G$ denotes the group, $L$ denotes the label, and $P(X \mid Y)$ stands for the probability that $X$ is true given that $Y$ is true.
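Equation 1 can be computed directly from label and group counts. Below is a minimal sketch in plain Python; the label and group values are illustrative only, not tied to any particular dataset.

```python
def disparate_impact_ratio(labels, groups,
                           unfavourable="fraud", unprivileged="old"):
    """DI ratio: P(L=unfavourable | G=unprivileged) / P(L=unfavourable | G=privileged)."""
    unpriv = [l for l, g in zip(labels, groups) if g == unprivileged]
    priv = [l for l, g in zip(labels, groups) if g != unprivileged]
    p_unpriv = unpriv.count(unfavourable) / len(unpriv)
    p_priv = priv.count(unfavourable) / len(priv)
    return p_unpriv / p_priv

labels = ["fraud", "non-fraud", "fraud", "non-fraud", "non-fraud", "non-fraud"]
groups = ["old", "old", "young", "young", "young", "young"]
di = disparate_impact_ratio(labels, groups)
# P(fraud | old) = 1/2 and P(fraud | young) = 1/4, so di == 2.0
```

In this toy example the unprivileged group is twice as likely to receive the unfavourable label, which the DI ratio of 2.0 captures.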
Since a fair model is expected to behave similarly for both privilege groups, the optimal value for the DI ratio is 1, and values between $0.8$ and $1.2$ are considered acceptable [1]. Matthews Correlation Coefficient. The Matthews Correlation Coefficient (MCC) is a classification score that is a special case of the Phi coefficient in statistics [37]. Common performance metrics used in binary classification, such as the F1 score and precision, represent the performance of a classifier by focusing more on true classifications. However, when a classification task has imbalanced labels, models that can only identify one class correctly might still have a high number of true classifications. This results in high accuracy and F1 scores, which do not represent the model’s performance in identifying the class with lower frequency. For such cases, MCC is reported to be a better alternative, since its formula distributes the focus to all types of true classifications and misclassifications [37, 38]. MCC takes values in the range $[-1, 1]$, where $1$ is the best possible value and $-1$ is the worst possible value. It is calculated as given in Equation 2.

$$ \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}} $$

In the equation, $TP$ is the number of true positive instances, $TN$ the number of true negative instances, $FP$ the number of false positive instances, and $FN$ the number of false negative classifications.

# 4 Exploratory Analysis

In this section, we present an exploratory analysis conducted on the BAF fraud dataset [30] to identify the limitations of previous fairness solutions for doubly imbalanced cases. This motivates our solution for balancing, which we present in Section 5.

# 4.1 Dataset

This work uses the BAF dataset suite, which is a collection of bank fraud data [30].
Since using real-life bank fraud data would not be feasible due to privacy and security concerns, the authors of the dataset generated it by training a Conditional GAN (CTGAN) model with added noise on real-life data. The data is released as a dataset suite of six variants, whose samplings skew the imbalance of the original data in different ways. In this study, the base variant of the suite is used. As the privileged and unprivileged groups, we consider the previously explored [34] old and young groups, split by whether the customer age attribute is at least 50.
# 4.2 Preliminary Experiments
To get a sense of the performance of previous ML fairness approaches on datasets with imbalanced labels, a sampling and debiasing pipeline is created using Learning Fair Representations (LFR) [39] as the fairness method, the DI ratio as the fairness metric, and MCC as the basic classification accuracy metric. Additionally, the Precision, Recall, F1 score (for the unfavourable label) and Accuracy metrics are presented to provide a clearer picture of the classification performance.
Table 1: Exploratory analysis: number of instances for each group in the training set.
LFR is a bias mitigation method that aims to learn a fairer representation of the dataset. The method is also combined with a set of classical supervised learning methods so that it can further generate classification results [39]. In this study, we prefer to use LFR since it has a reliable implementation available within AIF360. In the experiments, using the base variant of the BAF data suite with one million rows, training and test partitions are constructed that constitute 90% and 10% of the dataset, respectively. Table 1 shows the number of instances that belong to each group in the training set. It should be noted that, due to the nature of the fraud detection task, there is an extremely small number of unprivileged instances with the fraud label.
Although it is not as drastic as the label imbalance, the privilege groups also have an imbalanced distribution. This double imbalance poses a potential limitation for the fairness methods. We design our exploratory analysis to answer the following research questions:
• RQ1. Can debiasing techniques such as LFR be successfully applied to data with imbalanced labels?
• RQ2. Can debiasing techniques such as LFR be successfully applied to data with imbalanced privilege groups?
• RQ3. Can debiasing techniques such as LFR be successfully applied to data with double imbalance?
In order to answer these research questions, an experiment is designed to measure the effect of label imbalance on the performance of LFR. To this aim, four experiment setups with different sampled distributions for unfavourable labels and privilege groups are constructed. The distributions of the four setups are given in Table 2, where the instances are sampled from the training partition, while the test partition is kept as is for all of them. The test partition thus has the same distribution for the class labels and the privileged groups as in the original dataset. The detailed explanations of the setups are as follows:
Table 2: Different sampling setups for the exploratory analysis.
Table 3: The results of the exploratory analysis for ML algorithms without debiasing and sampling. Precision, recall and F1 results are for the Fraud label.
• Double-balanced: All four combinations of group/label options have the same number of instances, which is set by the number of unprivileged (old) unfavourable (fraud) instances.
• Unfavourable (fraud)-balanced: The fraud instances within each privilege group are sampled to constitute 50%, with the percentages of the privilege groups kept the same as in the original distribution.
• Privilege-balanced: While the percentage of fraud instances is kept as in the original distribution, the percentages of the privilege groups are equalised within the fraud and non-fraud partitions, respectively.
• Double-imbalanced (original distribution): The percentages for each group/label combination in the training set are kept as in the original dataset.
As an initial analysis, the performance of basic classifiers trained on the original training partition is reported in Table 3. In this analysis, we used Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF) and Naive Bayes (NB) classifiers. As given in the table, the DI ratio values are high for LR, RF and NB, denoting unfair classification. For SVM, a $NaN$ value indicates a division by zero, which occurs when the classifier never predicts the unfavourable label. The classification performance of the classifiers is also very limited in terms of MCC. For SVM, the MCC value is $NaN$ for the same reason as for the DI ratio. In this analysis, the accuracy is superficially high, since the classifiers detect the favourable label (non-fraud) very well due to the imbalanced nature of the class labels.
Table 4: Results of basic classifiers and LFR for 4 sampling strategies on the BAF dataset.
However, the poor performance of the classifiers in learning the unfavourable label (fraud) leads to low values for the Precision, Recall and F1 score metrics. In order to analyse the fairness and classification performance under the four sampling setups, we run LFR as well as the same classifiers as above (LR, SVM, RF and NB, without any additional debiasing method applied) on the sampled datasets. LFR is a stochastic representation learning approach, and the sampling brings an additional stochastic effect into the analysis. For this reason, we run LFR ten times for each setup and report the average results over those runs. The results are given in Table 4.
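The setups above boil down to downsampling cells of the group × label contingency table. The sketch below is only an illustration on made-up counts (not the real BAF data), and the per-group fraud balancing is a simplification of the procedure described above:

```python
import random

random.seed(0)  # the paper averages over repeated runs; we fix the seed here

# Made-up records with the two axes of the analysis (not the real BAF counts)
data = ([{"group": "priv", "label": "nonfraud"}] * 800
        + [{"group": "priv", "label": "fraud"}] * 60
        + [{"group": "unpriv", "label": "nonfraud"}] * 120
        + [{"group": "unpriv", "label": "fraud"}] * 20)

def downsample_to_equal(records, key):
    """Downsample so that every value of `key` occurs equally often;
    the smallest bucket fixes the per-bucket sample size."""
    buckets = {}
    for r in records:
        buckets.setdefault(key(r), []).append(r)
    n = min(len(b) for b in buckets.values())
    return [r for b in buckets.values() for r in random.sample(b, n)]

# Double-balanced: all four group/label cells end up with 20 instances each
double_balanced = downsample_to_equal(data, key=lambda r: (r["group"], r["label"]))

# Unfavourable (fraud)-balanced: 50% fraud separately within each privilege group
fraud_balanced = [r for g in ("priv", "unpriv")
                  for r in downsample_to_equal(
                      [x for x in data if x["group"] == g],
                      key=lambda r: r["label"])]
```

Here the unprivileged-fraud cell (20 records) dictates the Double-balanced sample size, mirroring how the scarce unprivileged fraud instances constrain the sampled setups in the analysis.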
In the Privilege-balanced setup, the balancing is applied to the privilege groups, while the label distribution is kept as in the original dataset. This setup therefore has imbalanced labels. In this setup, LFR fails, since the DI ratio cannot be computed (expressed as $NaN$). This situation arises when the denominator of the DI ratio formula becomes zero, because the model fails to predict the fraud / unfavourable label for any instance. With respect to RQ1, the result on this setup indicates that the considered debiasing method (LFR) cannot be successfully applied when labels are strongly imbalanced in the data collection. In this setup, the ML models also fail to produce debiased and accurate results. All models except NB fail to operate under the original distribution and the Privilege-balanced setup, and NB provides scores that are far below the acceptable fairness and classification performance values.
In the Unfavourable (fraud)-balanced setup, the sample is balanced for the class label, but the privilege group ratios stay the same as in the original data. Hence, this setup has imbalanced privilege groups. The result answers RQ2: the LFR method seems usable under imbalance of the privileged/unprivileged groups. In this setup, the ML models also obtain positive fairness scores.
In the Double-imbalanced setup, when LFR is applied, the DI ratio again has a $NaN$ value. Therefore, with respect to RQ3, the result on this setup suggests that the considered debiasing method (LFR) cannot be successfully applied to doubly imbalanced data collections. In this setup, the other models also fail. Only NB could be considered usable on this dataset; however, its DI ratio indicates a significant bias in the model.
Finally, we see that the combination of both sampling approaches in the Double-balanced setup yields an almost perfect DI ratio score, meaning that the LFR method works best when the data is balanced with respect to both the privilege groups and the fraud labels. The ML models also perform satisfactorily under the Double-balanced setup. In the first two setups, due to the data balancing for the favourable label, recall values improve, and there is also a slight improvement in MCC values. However, precision values still remain very low. The balancing also affects the accuracy values: there is a decrease to about $0.70 - 0.80$ from the superficially high accuracy values reported before in Table 3, and also in the Double-imbalanced results given in Table 4.
Our initial exploration shows that the considered fairness method fails to generate acceptable classifiers for imbalanced datasets, particularly in the label imbalance and double imbalance cases. We see that simply sampling the data to a given degree of balance before training a classifier can help to improve the fairness of the models. However, a drastic sampling ratio such as 50% heavily degrades the classification performance of the models, as seen in Table 4. This observation leads to a follow-up research question:
• RQ4. Can we find an optimal degree of balancing for data sampling in order to train supervised learning models such that fairness is improved with the least possible decrease in classification accuracy?
This research question motivated us to propose a sampling approach that aims to find the optimal balance in the training dataset with respect to the trade-off between fairness and classification performance, particularly for doubly imbalanced datasets.
# 5 The Proposed Dataset-Balancing Approach
In the exploratory analysis, we observed that the way we structure the balance in the training dataset affects the classification fairness and accuracy.
The next step can be considered the search for an optimal balance structure for the dataset, one that provides the best possible classification fairness and accuracy. To this aim, we model this problem as a multi-criteria optimisation problem and propose a search-based solution seeking the optimal balance between favourable vs. unfavourable labels and privileged vs. unprivileged groups in the data. The proposed solution is model-agnostic: it can be used together with any classification algorithm, as well as with other debiasing methods. The proposed method has two basic components:
• sampling according to a given balance structure, and
• grid search to find the optimal balance structure.
In the rest of this section, these components are described in more detail.
# 5.1 Sampling According to a Given Balance Structure
In the proposed method, given the original data collection $D$, the goal is to construct a sampled dataset $D'$ which has a certain target balance structure. The sampled collection $D'$ is composed of the following four partitions:
• $p\_f'$: privileged favourable samples
• $p\_uf'$: privileged unfavourable samples
• $up\_f'$: unprivileged favourable samples
• $up\_uf'$: unprivileged unfavourable samples
In this notation, $p'$ denotes the set of all privileged users, with both favourable and unfavourable labels, and $up'$ denotes the set of all unprivileged users. Similarly, $f'$ denotes the set of all favourably labeled users, in both the privileged and unprivileged groups, and $uf'$ denotes the set of all unfavourably labeled users. The counterparts of these variables without the prime denote the corresponding sets in $D$. There are basic principles to consider when sampling from the original data.
Machine learning methods, traditionally developed to maximise accuracy, typically skew their predictions towards the majority class. We hypothesise that samplings $D'$ with a higher imbalance than $D$ will not benefit our optimisation efforts. Thus, we do not want to make the imbalance along either the favourability or the privilege axis higher in our sampling $D'$ than in the original data $D$. Another restriction we incorporate is that the majority class in $D$ must not become the minority in $D'$ along either of the favourability and privilege axes. Additionally, the rate of favourable labels in the privileged and unprivileged groups should not be modified such that the rate of favourable instances in the unprivileged group exceeds the rate within the privileged group. In other words, the privileged and unprivileged groups should not interchange roles in terms of bias. These restrictions are listed as follows:
1. The majority privilege group's ratio in $D'$ cannot be higher than that in $D$.
2. The majority privilege group in $D$ cannot be the minority group in $D'$.
3. The majority favourability label's ratio in $D'$ cannot be higher than that in $D$. This also means that the minority favourability label's ratio cannot be lower than in $D$.
4. The majority favourability label in $D$ cannot be the minority label in $D'$.
5. The privileged group's advantage over the unprivileged group in receiving the favourable label shall not be greater in $D'$ than in $D$.
6. The unprivileged group cannot be more likely than the privileged group to be assigned the favourable label; in other words, the privileged and unprivileged groups should not interchange roles in terms of bias.
The formal descriptions of these restrictions are given in Equation 3.
$$
\begin{array}{rl}
& \text{Restrictions 1 and 2:}\ \frac{|p'|}{|D'|} \in R\left(\frac{|p|}{|D|},\, 0.5\right) \\
& \text{Restrictions 3 and 4:}\ \frac{|f'|}{|D'|} \in R\left(\frac{|f|}{|D|},\, 0.5\right) \\
& \text{Restrictions 5 and 6:}\ \frac{|p\_f'| / |p'|}{|up\_f'| / |up'|} \in R\left(\frac{|p\_f| / |p|}{|up\_f| / |up|},\, 1\right)
\end{array}
$$
Here, $R(x, y)$ is the closed interval between $x$ and $y$, as given in Equation 4.
$$
R(x, y) = \begin{cases} [x, y] & x \leq y \\ [y, x] & \text{otherwise} \end{cases}
$$
The sampling to construct $D'$ under the above restrictions, fulfilling a desired balance structure (balance ratio), is performed using the following three parameters, each taking values in $[0, 1]$:
• Parameter $\alpha$: It controls the unprivileged group rate within $D'$. When set to $0$, $D'$ has the same rate of unprivileged group instances as $D$. The value $1$ constrains the unprivileged group rate to be $0.5$, i.e. half of the instances in the sensitive attribute. For any value in $(0, 1)$, this rate is computed by linear interpolation between these two end points.
• Parameter $\beta$: It controls the rate of unfavourably labeled instances within $D'$. When set to $0$, $D'$ has the same rate of unfavourable labels as $D$, whereas the value $1$ makes this rate equal to $0.5$. As for $\alpha$, for any value in $(0, 1)$ this rate is computed by linear interpolation between these two end points.
• Parameter $\gamma$: It controls the ratio of the privileged group's favourability rate to that of the unprivileged group. With the value $0$, $D'$ has the same ratio as $D$, whereas the value $1$ makes the ratio equal to $1$, giving equal favourability rates to the privileged and unprivileged groups. As for the other two parameters, for any value in $(0, 1)$ this ratio is computed by linear interpolation between these two end points.
Given these three parameters, the sizes of the partitions $p\_f'$, $p\_uf'$, $up\_f'$ and $up\_uf'$ can be unambiguously determined. The partition ratios are numerically obtained from the constraints in Equation 5.
$$
\begin{array}{rl}
& 1.~ \frac{|p'|}{|D'|} = \frac{|p|}{|D|}(1 - \alpha) + 0.5\,\alpha \\
& 2.~ \frac{|f'|}{|D'|} = \frac{|f|}{|D|}(1 - \beta) + 0.5\,\beta \\
& 3.~ \frac{|p\_f'| / |p'|}{|up\_f'| / |up'|} = \frac{|p\_f| / |p|}{|up\_f| / |up|}\,(1 - \gamma) + \gamma
\end{array}
$$
This formulation has the advantage of being interpretable: $\alpha$ is the amount of balancing along the privileged/unprivileged axis, $\beta$ is the amount of balancing along the favourable/unfavourable axis, and $\gamma$ governs the balance of the favourability rates of the two privilege groups. In order to prevent the dataset size from affecting the sampling results, once the sampling ratios have been computed for each $(\alpha, \beta, \gamma) \in \{0, 0.01, 0.02, \ldots, 1\}^3$, we find the maximum size of $D'$ such that for every $(\alpha, \beta, \gamma)$ combination the computed sampling ratios are satisfiable given the instance counts ($p\_f$, $p\_uf$, $up\_f$, $up\_uf$) in $D$. The calculation of the sampling ratios $\left(\frac{|p\_f'|}{|D'|}, \frac{|p\_uf'|}{|D'|}, \frac{|up\_f'|}{|D'|}, \frac{|up\_uf'|}{|D'|}\right)$ from the aforementioned constraints is given in Equation 6.
$$
\begin{array}{l}
P' = \frac{|p'|}{|D'|} = \frac{|p|}{|D|}(1 - \alpha) + 0.5\,\alpha \\
F' = \frac{|f'|}{|D'|} = \frac{|f|}{|D|}(1 - \beta) + 0.5\,\beta \\
A' = \frac{|p\_f'| / |p'|}{|up\_f'| / |up'|} = \frac{|p\_f| / |p|}{|up\_f| / |up|}\,(1 - \gamma) + \gamma
\end{array}
$$
Given that $F_p' = \frac{|p\_f'|}{|p'|}$ and $F_{up}' = \frac{|up\_f'|}{|up'|}$, Equation 7 shows how to compute $F_{up}'$.
$$
\begin{array}{l}
F_p' = A' \, F_{up}' \\
F' = P' \, F_p' + (1 - P') \, F_{up}' \\
\phantom{F'} = P' \, (A' \, F_{up}') + (1 - P') \, F_{up}' \\
\phantom{F'} = F_{up}' \, (1 + A' P' - P'), \ \text{then} \\
F_{up}' = \frac{F'}{1 + A' P' - P'}
\end{array}
$$
Once $F_{up}'$ is computed, $F_p' = \frac{|p\_f'|}{|p'|}$ can be trivially computed as $F_{up}' \cdot A'$. Then the equations for computing the sampling ratios are as given in Equation 8.
Table 5: Example: initial data characteristics.
Table 6: Data composition of the sampled dataset $D'$.
$$
\begin{array}{rl}
& \frac{|p\_f'|}{|D'|} = P' \, F_p' \\
& \frac{|p\_uf'|}{|D'|} = P' \, (1 - F_p') \\
& \frac{|up\_f'|}{|D'|} = (1 - P') \, F_{up}' \\
& \frac{|up\_uf'|}{|D'|} = (1 - P') \, (1 - F_{up}')
\end{array}
$$
Example. Given a dataset $D$ and parameters $\alpha, \beta, \gamma$, we demonstrate how the sizes of the partitions are determined from Equation 8 to obtain the sample. Suppose that the initial dataset $D$ has the distribution given in Table 5. For the parameters $\alpha = 0.5$, $\beta = 0.8$, $\gamma = 0.4$, using Equation 8, we compute the sampling ratios and the corresponding upper bounds on $|D'|$ as follows:
• $\frac{|p\_f'|}{|D'|} \approx 0.421 \Longrightarrow |D'| \leq \frac{|p\_f|}{0.421} \approx \frac{19500}{0.421} = 46318$
• $\frac{|p\_uf'|}{|D'|} \approx 0.284 \Longrightarrow |D'| \leq \frac{|p\_uf|}{0.284} \approx \frac{500}{0.284} = 1761$
• $\frac{|up\_f'|}{|D'|} \approx 0.176 \Longrightarrow |D'| \leq \frac{|up\_f|}{0.176} \approx \frac{1900}{0.176} = 10795$
• $\frac{|up\_uf'|}{|D'|} \approx 0.120 \Longrightarrow |D'| \leq \frac{|up\_uf|}{0.120} \approx \frac{100}{0.120} = 833$
Although computing the ratios at $(\alpha, \beta, \gamma) = (0.5, 0.8, 0.4)$ yields the tightest of these upper bounds on $|D'|$ as 833, computing over all $(\alpha, \beta, \gamma) \in \{0, 0.01, 0.02, \ldots, 1\}^3$ might yield a much tighter bound. Indeed, performing the iteration over $(\alpha, \beta, \gamma) \in \{0, 0.01, 0.02, \ldots, 1\}^3$ yields a lower upper bound of $|D'| \leq 394$. Using this bound, the sample composition is as given in Table 6.
# 5.2 Grid Search to Find the Optimal Balance Structure
Given that the balance structure in the sample is expressed through the parameters $\alpha$, $\beta$ and $\gamma$, grid search is used for finding the parameter values that produce the optimal results with respect to fairness and classification accuracy.
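Each grid point $(\alpha, \beta, \gamma)$ is first mapped to the four partition ratios via Equations 6–8 before any model is trained. The sketch below is an illustrative rendering with made-up counts (the function name and dictionary keys are ours):

```python
def sampling_ratios(c, alpha, beta, gamma):
    """Map (alpha, beta, gamma) to the four partition ratios of Equation 8.

    c: instance counts of the original D, with keys p_f, p_uf, up_f, up_uf.
    """
    p, up = c["p_f"] + c["p_uf"], c["up_f"] + c["up_uf"]
    f = c["p_f"] + c["up_f"]
    D = p + up
    # Equation 6: linear interpolation between the original rates and perfect balance
    P1 = (p / D) * (1 - alpha) + 0.5 * alpha                        # P'
    F1 = (f / D) * (1 - beta) + 0.5 * beta                          # F'
    A1 = ((c["p_f"] / p) / (c["up_f"] / up)) * (1 - gamma) + gamma  # A'
    # Equation 7: per-group favourable rates
    F_up = F1 / (1 + A1 * P1 - P1)
    F_p = A1 * F_up
    # Equation 8: the four sampling ratios (they always sum to 1)
    return {"p_f": P1 * F_p, "p_uf": P1 * (1 - F_p),
            "up_f": (1 - P1) * F_up, "up_uf": (1 - P1) * (1 - F_up)}

counts = {"p_f": 900, "p_uf": 50, "up_f": 40, "up_uf": 10}  # made-up example
identity = sampling_ratios(counts, 0, 0, 0)  # reproduces D's own proportions
balanced = sampling_ratios(counts, 1, 1, 1)  # all four ratios become 0.25
```

The two end points illustrate the interpretation of the parameters: all zeros leaves the distribution of $D$ untouched, while all ones yields full equilibration.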
Since the problem involves two metrics to optimise, the DI ratio for fairness and MCC for classification accuracy, the solution is obtained through a multi-criteria optimisation approach. The loss functions related to DI_RATIO and MCC are given in Equation 9 and Equation 10, respectively.
$$
DI\_RATIO\_LOSS = |1 - DI\_RATIO|
$$
$$
MCC\_LOSS = |1 - MCC|
$$
A combined loss function is defined as in Equation 11.
$$
COMBINED\_LOSS = c_1 \cdot MCC\_LOSS + c_2 \cdot DI\_RATIO\_LOSS
$$
Since the proposed approach is model-agnostic, the sampling can be applied before any fairness method and can be used for constructing the training dataset of any classification model. The DI_RATIO and MCC values can then be obtained for any given classifier. In this study, Logistic Regression (LR), Random Forest (RF), Support Vector Machine (SVM) and Naive Bayes (NB) classifiers are used. Given the dataset $D$ and the balance structure parameters ($\alpha$, $\beta$ and $\gamma$), we can inspect the performance of a given classification model with Algorithm 1.
Data: $\alpha$, $\beta$, $\gamma$, model, combined loss coefficients $c_1$, $c_2$
$D' \gets SAMPLE(D, \alpha, \beta, \gamma)$;
$MODEL \gets TRAINMODEL(model, D'.training)$;
MODEL.evaluate($D'$.validate, $D'$.test);
return (MODEL.MCC_LOSS, MODEL.DI_RATIO_LOSS, MODEL.COMBINED_LOSS);
Algorithm 1: Model Inspection
In this algorithm, a sample $D'$ is first constructed. The classification model is then trained on the training partition of $D'$ and evaluated on the validation and test partitions of $D'$. This evaluation returns the individual loss values as well as the combined loss value of the model. Grid Search.
Grid search is typically used for hyper-parameter optimisation in ML methods. Given a set of values for each of the hyper-parameters of a method, the grid search algorithm trains a model with every combination of these values and evaluates the performance of each model version. The optimal values for the hyper-parameters are then chosen based on the performance of the model versions [40]. In our study, the search space involves the parameters $\alpha$, $\beta$ and $\gamma$, which take continuous values in $[0, 1]$. Hence, this range is divided into equal intervals, and the values delimiting the intervals are used as the candidate values for the parameters in the grid search. We denote this search as Grid search - level 0. In the experiments, $[0, 1]$ is divided into intervals of size 0.1. In order to refine the obtained results further, another round of grid search is conducted around the top $k$ points obtained at search level 0. This refinement round is denoted as Grid search - level 1. In the experiments, we use the top 5 points obtained at level 0, and the neighbourhood of each selected point, on both sides, is further divided into equal intervals of size 0.01. Optimal Solution and Pareto Front. The conducted grid search finds the best parameter values optimising the combined loss. Furthermore, we can describe the set of best solution(s) as the Pareto optimum for MCC_LOSS and DI_RATIO_LOSS. The set of Pareto optimal solutions (the Pareto front) is defined as a set of solutions such that no objective can be improved without sacrificing at least one other objective [41]. Pareto fronts can be used to pick the best balance between fairness and classification performance. Instead of simply maximising accuracy, it is possible to choose a model set-up that lies on the Pareto front.
This allows one to pick a trade-off that makes sense for the specific needs at hand. Various points on the Pareto front can be explored to see what might happen in different situations; for example, what if fairness becomes 20% more important than accuracy? The Pareto front can be used for this kind of analysis. Wider Pareto fronts suggest that the model is more stable even if the data balance or the fairness definition changes. Because of this, models with broader Pareto regions are better choices for real-world use, where the data might change over time or the rules might evolve. In applications with legal or ethical fairness requirements (such as a DI ratio of 0.8 or higher in hiring or loan decisions), the Pareto front can be utilised to find settings that meet these rules while still obtaining the best possible prediction results. Within the scope of the proposed approach, the objectives are DI_RATIO_LOSS and MCC_LOSS. Hence, the Pareto front is defined as given in Equation 12.
$$
\begin{array}{rl}
& S\_Pareto = \{ p \in S : \nexists (p' \in S) \ \mathrm{s.t.} \\
& \quad p'[DI\_RATIO\_LOSS] < p[DI\_RATIO\_LOSS] \ \mathrm{and} \\
& \quad p'[MCC\_LOSS] < p[MCC\_LOSS] \}
\end{array}
$$
Here, $p$ and $p'$ are results from the Model Inspection (as given in Algorithm 1) and $S$ is the set of all Model Inspection results. Also, note that some $p$ minimising the COMBINED_LOSS is guaranteed to be in $S\_Pareto$. Thus, minimising the COMBINED_LOSS can be considered a method of selecting a desirable instance from $S\_Pareto$. We can define $S\_Pareto$ through the COMBINED_LOSS as given in Equation 13.
$$
\begin{array}{rl}
& S\_Pareto = \{ p \in S : \exists (c_1, c_2 \geq 0) \ \mathrm{s.t.} \\
& \quad (\nexists (p' \in S) \ \mathrm{s.t.} \\
& \quad p' \ \text{has lower COMBINED\_LOSS than} \ p \\
& \quad \text{for coefficients} \ c_1, c_2) \}
\end{array}
$$
$S\_Pareto$ is the set of results that minimise the respective COMBINED_LOSS for some coefficients $c_1$ and $c_2$. When these results are visualised in a graph, the Pareto front presents an overview of the values of the two loss functions with respect to each other, which is useful for evaluating the degree of trade-off between the metrics. For this reason, in Section 6 we present the results of the analysis in terms of the Pareto front, as well as the optimal solution for given $c_1$ and $c_2$ coefficients. Note that the proposed solution is also applicable to the simpler, singly imbalanced case, where at least one of the parameters is set to 0 as the default value.
# 6 Experiments and Results
# 6.1 Experiment Settings
In the experiments, the effectiveness of the proposed approach is analysed using supervised learning methods – Logistic Regression (LR), Random Forest (RF), Support Vector Machine (SVM) and Naive Bayes (NB) – with imbalanced data. Additionally, LFR is used in order to investigate the effect of the proposed method when combined with another debiasing method developed for balanced datasets. In the experiments, the $c_1$ and $c_2$ coefficients in the COMBINED_LOSS are set to 1.
# 6.2 Data Pre-processing for the Classifiers
Since the dataset includes categorical features, a data pre-processing step is needed in order to use the classifiers that require numeric features, such as LR and SVM.
To this aim, categorical features are represented with one-hot encoding, and the encoded categorical data are appended to the numerical features. A standard scaler, trained over each sampled $D'$, is also applied as a part of pre-processing to further improve the results. Since such classifiers output probabilities, we need to determine a threshold value for mapping these probabilities to class label predictions. This parameter is also optimised within the grid search.
# 6.3 Analysis: Optimal Results
In the experiments conducted to determine the optimal balance ratios, due to efficiency considerations, a sample of 5000 instances drawn from the original dataset is used. This sampled collection is divided into training, level 0 test and level 1 test partitions with ratios of 60%, 20% and 20%, respectively. The sampling during the optimal balance ratio search per model is done on the training partition. The test partitions are the same for all experiments.
Table 7: Performance results: optimal parameters.
Table 8: Performance results: Combined Loss, DI Ratio, MCC, Precision, Recall and F1 values on a subset of the BAF dataset. (Precision, Recall and F1 values are obtained for the Fraud label.)
The optimal data balance ratios obtained for the different supervised learning techniques, expressed through the $\alpha$, $\beta$ and $\gamma$ parameters, are presented in Table 7. Note that these parameters take values in $[0, 1]$, where 0 denotes no change in the distribution and 1 denotes full equilibration (cf. Section 5.1). In the table, the optimal points obtained by both levels of the grid search are presented. The prominent observations on the obtained optimal parameters are as follows:
Table 9: Performance results: analysis on the larger test collection of the BAF dataset. (Precision, Recall and F1 values are obtained for the Fraud label.)
In this experiment, the test data has the same size (10% of the original dataset) and the same distribution as the one used in Table 4.

• In some of the models, the Level 1 parameters are refined versions of the Level 0 parameters, as seen in the RF and NB results. In LFR, the optimal parameters (0.1, 0.9, 0.6) are further refined to (0.14, 0.90, 0.66) through a search in narrower intervals around the Level 0 optimal points. On the other hand, for models such as LR and NB, the Level 0 and Level 1 optimal points appear to be largely independent of each other, meaning that the Level 1 search finds the best point around a point other than the best point found in Level 0.

• There are overlaps among the optimal points obtained for some of the models, such as the $\gamma$ parameters of LR and RF for Level 0, or SVM and LFR for Level 1. However, there is no consensus on the optimal parameters across all learning models used in the study. This indicates that the behaviour of the model determines the optimal balance ratios for the dataset.

The fairness and classification accuracy performance obtained when the training dataset is balanced according to the optimal parameters is given in Table 8 for the different classifiers. The main observations on these results are as follows:

• When the performance results for the optimal parameters obtained by Level 0 and Level 1 are compared, it is seen that the finer-grained optimal results provide an improvement in MCC for almost all the models, and a slight but acceptable decrease in DI Ratio.

• When compared to the fairness and classification accuracy values obtained without debiasing (given in Table 3), a clear improvement is observed in terms of MCC and accuracy for NB.
(a) Pareto front with LR method with Grid Search Level 0 and Level 1 (b) Pareto front with RF method with Grid Search Level 0 and Level 1

• When compared to the results obtained by LFR for the Double balanced and Unfavourable balanced cases in our exploratory analysis (given in Table 4), the LFR results in Table 8 show an improvement for DI Ratio, moving it closer to the optimal value, from 1.18 to 0.96. Hence, our proposed data balancing approach provides improvement when applied together with another debiasing method, LFR.

Fig. 1: Pareto front analysis of the models on the BAF dataset

In Table 9, further analysis is conducted on a larger test collection whose size and distribution are the same as those used in the analysis presented in Table 4. In this experiment, the models trained with the optimal balancing setup for each model are used. When we compare the results obtained by LFR in Table 9 against the Double imbalanced and Privilege-balanced cases, we observe improvements in both debiasing and classification performance. For the comparison against the Double balanced and Unfavourable balanced setups in Table 4, we observe an improvement in MCC from 0.059 to 0.070 and in F1 from 0.04 to 0.05, while keeping the DI Ratio in the acceptable region. The conducted analysis answers RQ4: an optimal degree of balancing in terms of both the unfavourable label and the unprivileged groups provides improved fairness (DI Ratio). Although in posing RQ4 the expectation was to see a loss in classification accuracy, slight increases can be realised in terms of MCC and F1 score, which we also see in Table 8. Additionally, we see nearly consistent performance when a larger test data collection is used for evaluation under the same optimal balance parameters. When the results in Table 8 and Table 9 are compared, the metric values remain similar, except that DI Ratio and Combined Loss tend to be somewhat lower and higher, respectively, in Table 9.
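The two-level grid search over the balance-ratio parameters $(\alpha, \beta, \gamma)$ described above — a coarse Level 0 pass over $[0, 1]$ followed by a finer Level 1 pass in a narrow interval around the Level 0 optimum — can be sketched as follows. This is a minimal illustration, assuming a caller-supplied `evaluate` function that returns (MCC_Loss, DI_Loss) for a given parameter triple; the step sizes and helper names are hypothetical.

```python
def combined_loss(params, evaluate, c1=1.0, c2=1.0):
    """COMBINED_LOSS = c1 * MCC_Loss + c2 * DI_Loss (c1 = c2 = 1 in the paper)."""
    mcc_loss, di_loss = evaluate(params)
    return c1 * mcc_loss + c2 * di_loss

def two_level_grid_search(evaluate, step0=0.2, step1=0.04):
    """Level 0: coarse search over (alpha, beta, gamma) in [0, 1].
    Level 1: finer search in a narrow interval around the Level 0 optimum."""
    def grid(lo, hi, step):
        n = int(round((hi - lo) / step))
        return [lo + i * step for i in range(n + 1)]

    def search(ranges):
        best, best_loss = None, float("inf")
        for a in ranges[0]:
            for b in ranges[1]:
                for g in ranges[2]:
                    loss = combined_loss((a, b, g), evaluate)
                    if loss < best_loss:
                        best, best_loss = (a, b, g), loss
        return best, best_loss

    level0, _ = search([grid(0.0, 1.0, step0)] * 3)
    fine = [grid(max(0.0, p - step0), min(1.0, p + step0), step1)
            for p in level0]
    return search(fine)
```

On a toy objective, the Level 1 pass recovers a finer optimum inside the coarse cell found by Level 0, mirroring how the paper refines (0.1, 0.9, 0.6) to (0.14, 0.90, 0.66) for LFR.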
Since the balance ratio parameters are determined independently of this test collection, the amount of observed deterioration can be considered an expected and acceptable result.

# 6.4 Analysis: Pareto Fronts

The Pareto fronts obtained for the LR, RF, SVM, NB and LFR methods are given in Figure 1. In the sub-figures, the Pareto front is presented as a graph of MCC_Loss vs. DI_Loss. Note that these loss values (cf. Equations 9 and 10) are the complements of the MCC and DI Ratio metrics, such that the best case is at loss 0. In the figures, the Pareto fronts obtained from both Level 0 (blue line) and Level 1 (red line) of the grid search are presented. Additionally, the loss values obtained at the optimal parameters are marked as dots on the Pareto front line. The prominent observations on these figures are as follows:

• The Pareto fronts obtained by the Level 1 search have lower loss values than those of Level 0, which shows the improvement due to the finer-grained search. However, the structure of the front line and the gap between the front lines of the different levels vary with the supervised learning method. The gaps are larger for LR, RF and SVM, whereas for NB and LFR the improvement in loss is limited.

• The Pareto front graphs show the trade-off between the MCC and DI Ratio values and the limits on the best loss one can obtain on these metrics. For example, in Figure 1a, it is seen that, for the LR method, MCC_Loss cannot be reduced below 0.78 even at the expense of an increase in DI_Loss up to 0.6. On the other hand, DI_Loss can be reduced to 0. This information helps a user see the nature of the dataset and the performance limits of the model, and set the coefficients $c_1$ and $c_2$ in the overall loss.

# 6.5 Experiments with Additional Datasets

In Section 6.3 and Section 6.4, the effectiveness of the proposed approach is analysed on the BAF dataset.
To demonstrate the generalization of the approach, additional experiments are conducted on two other datasets.

# 6.5.1 Vehicle Insurance Claim Fraud Dataset (VIF)

The Vehicle Insurance Claim Fraud Detection dataset is also doubly imbalanced, consisting of 15420 instances with a fraud ratio of 5.99%. Upon further inspection, it is seen that the dataset can be partitioned by the sensitive attribute Sex, where we see a DI Ratio of 1.45 if we consider instances with the attribute Sex = Female as privileged. The dataset is also imbalanced with respect to the sensitive attribute Sex, with only 15.69% of the instances labeled as Female. Table 10 shows the performance metrics of the basic classifiers for the fraud detection task on the VIF dataset. It can be seen that LR, RF and SVM identify almost no fraud instances, while NB randomly misidentifies non-fraud instances as fraud, resulting in worse performance as well as a worse DI Ratio.

Table 10: The results of the classification algorithms without debiasing and sampling on the VIF dataset. Precision, recall and F1 results are for the Fraud label.

Table 11 shows the performances of the basic classifiers and LFR for the same 4 sampling strategies used on the BAF dataset (shown in Table 2), updated to reflect the original privilege and fraud ratios of the VIF dataset. Similar to the preliminary analysis conducted on the BAF dataset, we see that balancing the dataset yields usable classifiers with promising DI Ratio values. It should be noted that, since LFR does not scale well with the number of attributes per data instance, we selected a subset of attributes using the Information Gain and Gain Ratio attribute selection methods in order to accurately represent the dataset. Finally, the results of our proposed method on the VIF dataset are shown in Tables 12 and 13, with the Pareto fronts for each method shown in Figure 2.
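The DI Ratio used throughout these comparisons is the ratio of favorable-outcome rates between the unprivileged and privileged groups, with 1.0 as the ideal value. A minimal sketch follows; the assumption that the favorable outcome is the non-fraud label (0) and the function name are illustrative, not taken from the paper.

```python
def di_ratio(y_pred, sensitive, unprivileged):
    """Disparate Impact ratio: favorable-outcome rate of the unprivileged
    group divided by that of the privileged group (1.0 is ideal).
    Assumes the favorable outcome is label 0 (non-fraud)."""
    def favorable_rate(in_unprivileged):
        group = [y for y, s in zip(y_pred, sensitive)
                 if (s == unprivileged) == in_unprivileged]
        return sum(1 for y in group if y == 0) / len(group)
    return favorable_rate(True) / favorable_rate(False)
```

For example, if the privileged group always receives the favorable prediction while the unprivileged group receives it half the time, the DI Ratio is 0.5; values above 1 (such as the 1.45 reported for VIF and CCF) indicate the nominally unprivileged group receives the favorable outcome more often.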
Comparing Table 12, which shows the performance of the proposed method, with Table 10, we see a clear increase in classification performance in terms of the MCC and F1 values, and improved fairness in terms of the DI Ratio values. When compared with Table 11, we see an overall improvement in DI Ratio, and an increase in the MCC value for the LFR model.

Table 11: Results of basic classifiers and LFR for 4 sampling strategies on the VIF dataset.

# 6.5.2 Credit Card Fraud Dataset (CCF)

The Credit Card Fraud Detection dataset consists of 307511 instances, with a fraud percentage of 8.07%. Upon further inspection, it is seen that the dataset can be partitioned by the sensitive attribute CODE_GENDER, where we see a DI Ratio of 1.45 if we consider instances with the attribute CODE_GENDER = F (female) as privileged. The dataset has a slight imbalance with respect to the CODE_GENDER attribute, with 65% of the instances labeled as F (female) and 35% labeled as M (male). Table 14 shows the performance metrics of the basic classifiers for the fraud detection task on the CCF dataset. It can be seen that LR, SVM and RF fail to identify any fraud instances, while NB randomly misidentifies non-fraud instances as fraud, resulting in worse performance as well as a worse DI Ratio. Table 15 shows the performances of the basic classifiers and LFR for the same 4 sampling strategies used on the BAF dataset (shown in Table 2), updated to reflect the original privilege and fraud ratios of the CCF dataset. Similar to the preliminary analysis conducted on the BAF dataset, we see that balancing the dataset yields usable classifiers with promising DI Ratio values.
It should be noted that, since LFR does not scale well with the number of attributes per data instance, we selected a subset of attributes using the Information Gain and Gain Ratio attribute selection methods in order to accurately represent the dataset.

Table 12: Performance results of the proposed method on the VIF dataset. (Precision, Recall and F1 values are obtained for the Fraud label.)

The performance results of our proposed method on the CCF dataset are shown in Tables 16 and 17, with the Pareto fronts for each method shown in Figure 3. With almost perfect DI Ratio scores, it can be seen that the proposed method can achieve good classification performance while taking the fairness of the models into account. Additionally, since the CCF dataset has a slightly lower imbalance with respect to the privilege classes, the obtained results also imply that this method can be used in a single-imbalance setting. The plots showing the Pareto fronts for the VIF and CCF datasets (in Figure 2 and Figure 3, respectively) complement the results found with the BAF dataset. They also give more insight into how well a model performs versus how fair it is, across different types of data. For the VIF dataset, using the Level 1 grid search consistently made the models perform better, with a noticeable drop in overall losses and clearer trade-off curves. Models like RF and LR showed broader and more distinct Pareto fronts, meaning they are more flexible in balancing fairness against predictive performance. However, for the NB and LFR models, the Pareto fronts were narrower, suggesting less room to improve both fairness and prediction at the same time. Furthermore, SVM, which initially failed to detect fraud cases (as seen in Table 11), improved with our balancing method and shows good, Pareto-optimal results in Figure 2.
For the CCF dataset, the LFR model did not learn well due to the imbalance in the data (as shown in Table 15) and produced very narrow Pareto fronts. While balancing helped somewhat, its performance was still not as good as that of the other methods. On the other hand, RF and SVM had the widest Pareto regions, indicating that they are more adaptable, improving fairness without losing much of their predictive ability. It is also notable that the DI Ratio Loss stayed very close to zero in many setups, especially with the Level 1 search. This shows that our proposed method can achieve almost perfect fairness across different groups while still performing well, particularly with data that is not too imbalanced.

Table 13: Performance Results: Analysis on the Larger Test Collection of the VIF dataset. (Precision, Recall and F1 values are obtained for the Fraud label.)

Table 14: The results of the classification algorithms without debiasing and sampling on the CCF dataset. Precision, recall and F1 results are for the Fraud label.

Models such as RF and SVM generally create broader and more varied Pareto fronts, meaning they are more flexible when a good balance between fairness and accuracy is needed. On the other hand, models such as LFR and NB often show narrower or fragmented Pareto fronts, indicating little room to improve fairness without making their predictions worse. The Level 1 grid search consistently made the Pareto front wider and better across all datasets, suggesting that such nested balancing strategies are particularly important when the models are sensitive to imbalances both within specific groups and across different classes.

(e) Pareto front with LFR method with Grid Search Level 0 and Level 1

Fig. 2: Pareto front analysis of the ML models on the VIF dataset
Fairness has been identified as an important aspect of Machine Learning and Artificial Intelligence solutions for decision making. Recent literature offers a variety of approaches for debiasing; however, many of them fall short when the data collection is imbalanced. In this paper, we focus on a particular case, fairness in doubly imbalanced datasets, where the data collection is imbalanced both for the label and for the groups of the sensitive attribute. Firstly, we present an exploratory analysis to illustrate the limitations of debiasing on a doubly imbalanced dataset. Then, a multi-criteria based solution is proposed for finding the most suitable sampling and distribution for the label and the sensitive attribute, in terms of fairness and classification accuracy.
# 1. Introduction

Machine learning (ML) systems are typically designed under the assumption that the training and test sets are sampled from the same statistical distribution. However, this often does not hold in practice. For example, during deployment, test data may include previously unseen classes. In such cases, the ML system may produce incorrect results with high confidence (DeVries & Taylor, 2018). Therefore, it is crucial to develop methods that enable ML systems to detect out-of-distribution (OOD) data. Detecting OOD data allows users to be alerted of potentially unreliable predictions and enables the system to adapt accordingly. OOD detection has gained considerable attention recently (Yang et al., 2022), with a number of recent state-of-the-art (SoA) approaches (Sun et al., 2021; Djurisic et al., 2022; Ahn et al., 2023; Sun & Li, 2022; Zhao et al., 2024).

Towards this goal, we develop a theory to formulate OOD features based on the underlying statistical distributions of ID and OOD data. We develop a novel loss functional, based on information theory, defined on the set of OOD features, whose optimization yields OOD features as a function of the underlying statistical distributions. Unlike current approaches, our OOD features are random and thus follow a statistical distribution; the mean value models the deterministic shaping features in the literature. Our loss aims to determine the OOD feature that maximally separates the resulting ID and OOD feature distributions through the Kullback-Leibler (KL) divergence. As separating distributions by itself is ill-posed, we propose a novel use of the Information Bottleneck (IB) (Tishby et al., 2000) as regularization. In our use, IB seeks compressed features that preserve the information the data has about OOD, aiming for a feature representation that contains only the information necessary for OOD detection.
As this loss functional is defined on probability measures (representing the distribution of the OOD feature), it is an infinite-dimensional optimization problem, and thus we use the calculus of variations (Troutman, 2012) to derive the optimization procedure. Our theory offers an explanation of several techniques employed in SoA rule-based approaches, and suggests a new shaping function that outperforms other shaping functions in SoA. There have been recent theories for OOD detection (Zhao et al., 2024; Xu et al., 2023). These works have introduced the novel idea of formulating OOD features through a loss function rather than the empirically driven rule-based approaches of the past, and they motivate our work. In contrast to the aforementioned works, our theory employs a novel information-theoretic loss function, which offers several advantages. Our theory shows how different assumptions on the OOD distribution lead to different OOD feature shaping approaches, and it more accurately explains properties of several SoA rule-based approaches as arising from different underlying OOD distributions and different regularization (see the next section for a more detailed discussion). In summary, our contributions are as follows: 1. We introduce a novel theory and framework for deriving OOD features from neural networks. This involves formulating OOD features as random features through a novel variational loss functional that contains two terms: one that maximizes the KL divergence between the random feature under the ID and OOD distributions, and the Information Bottleneck, which extracts the information from the data that is relevant for OOD detection. 2. We develop the techniques to optimize the loss functional using the calculus of variations, and specifically derive a computationally feasible algorithm in the one-dimensional data case. 3.
Using our framework, we show how the OOD shaping functions change based on various data distributions. We relate the mean value of our OOD features to existing OOD shaping functions. 4. We introduce a novel piece-wise linear OOD feature shaping function predicted by our theory, and show that it leads to state-of-the-art results on OOD benchmarks.

# 1.1. Related Work

We briefly review related work; the reader is referred to (Yang et al., 2022) for a survey. Post-hoc approaches to OOD detection, which are applied to pre-trained models without additional training, have focused on constructing scoring functions to differentiate OOD from in-distribution data, leveraging confidence scores (Hendrycks & Gimpel, 2018a; Zhang & Xiang, 2023; Liang et al., 2020), energy-based metrics (Liu et al., 2021; Wu et al., 2023; Elflein et al., 2021) and distance-based measures (Lee et al., 2018; Sun et al., 2022). For example, MSP (Hendrycks & Gimpel, 2018a) used the maximum softmax probability as a confidence score. ODIN (Liang et al., 2020) improved OOD detection by applying temperature scaling and adding small perturbations to input data before computing the maximum softmax probability. (Ren et al., 2019) proposes the likelihood ratio in place of raw likelihoods, which do not work well for OOD detection (Kirichenko et al., 2020). (Lee et al., 2018) leveraged the Mahalanobis distance to compute the distance between features and classes. KNN (Sun et al., 2022) uses a non-parametric approach. Energy methods (Liu et al., 2021) present an alternative to softmax scores by employing the Helmholtz free energy. Energy scoring has been adopted by several OOD feature-shaping approaches; feature-shaping is the focus of our work.
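The two scoring functions most relevant here, MSP and the energy score, can be sketched directly from a vector of logits. This is a minimal stdlib-only illustration (not the cited implementations); both scores are conventionally oriented so that higher values suggest in-distribution data.

```python
import math

def msp_score(logits):
    """Maximum softmax probability (as in MSP): the largest softmax
    output, computed with the max-subtraction trick for stability."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    return max(exps) / sum(exps)

def energy_score(logits, temperature=1.0):
    """Negative Helmholtz free energy, T * logsumexp(logits / T)
    (as in energy-based OOD scoring); higher suggests ID data."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    return temperature * (m + math.log(sum(math.exp(s - m) for s in scaled)))
```

A confidently classified input (one dominant logit) scores higher on both measures than a flat logit vector, which is the basic signal these post-hoc detectors threshold on.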
Feature-shaping approaches to OOD detection: Several methods perform OOD detection by computing features from the outputs of layers of the neural network (Sun et al., 2021; Kong & Li, 2023; Djurisic et al., 2022; Fort et al., 2021b; Zhao et al., 2024) before they are input to a score. In ReAct (Sun et al., 2021), the penultimate layer outputs are processed element-wise by clipping large values. It is empirically noted that OOD data results in large spikes in activations, which are clipped to better separate the ID and OOD distributions. BFAct (Kong & Li, 2023) uses the Butterworth filter to smoothly approximate the clipping. ASH (Djurisic et al., 2022) computes features by sparsifying intermediate outputs of the network, flooring small values to zero and passing larger values with possible scaling. DICE (Sun & Li, 2022) is another approach to sparsification. Unlike purely element-wise approaches, ASH then performs vector processing of the shaped feature before it is input to a score. VRA (Xu et al., 2023) and (Zhang et al.) derive element-wise shaping functions via an optimization approach. Optimization-based approaches for feature shaping: (Xu et al., 2023) formulates a loss function for deterministic OOD features that aims to separate the means of the ID and OOD feature distributions, with regularization added to keep the OOD feature near the identity through the L2 norm. (Zhao et al., 2024) analyzes a similar loss function but with point-wise rather than L2 regularization. They further offer simplifications to remove the reliance on the OOD distribution. These works have introduced the novel idea of formulating OOD features through a loss function. Our approach offers several advantages. Compared to (Zhao et al., 2024), we present a framework in which we can study the OOD feature as a function of the underlying OOD distribution. This reveals the implicit assumptions in several existing methods. In contrast, (Zhao et al., 2024) aims to remove dependence on the OOD distribution.
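The element-wise shaping operations discussed above reduce to very simple transforms of the activation vector. The following is a simplified sketch of the two basic primitives — ReAct-style clipping and ASH/VRA-style pruning of small values; in the actual methods the thresholds are chosen from ID activation statistics (e.g., percentiles), which is omitted here.

```python
def react(feature, c):
    """ReAct-style shaping: clip activations element-wise at threshold c,
    suppressing the large spikes associated with OOD data."""
    return [min(z, c) for z in feature]

def prune_low(feature, t):
    """ASH/VRA-style sparsification: floor activations below t to zero,
    passing larger values through unchanged."""
    return [z if z > t else 0.0 for z in feature]
```

The shaped feature is then fed to a score (e.g., the energy score) to decide ID vs. OOD.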
Our results show that feature shaping can vary as a function of the underlying OOD distribution. Compared to (Zhao et al., 2024; Xu et al., 2023), our theory offers an explanation of qualitative properties of existing SoA methods. For instance, the clipping of large values in OOD features (as in ReAct (Sun et al., 2021)) is associated with a higher Information Bottleneck (IB) regularization, which is needed for noisier OOD datasets. The negative slope at large values in (Zhao et al., 2024; Xu et al., 2023) is associated with low IB regularization. Also, the pruning of small feature values in (Xu et al., 2023; Djurisic et al., 2022) is associated with OOD distributions with heavier tails. See Section 4 for more technical details.

# 2. Variational Formulation of OOD Features

We formulate OOD features as an optimization problem. For the sake of the derivation, we will assume in this section that the probability distributions of ID and OOD features from the network are given. In practice, the ID distribution can be estimated from training data. In Section 4, we will then study the OOD features under various distributions to show how the features vary with the distribution and to surface plausible assumptions made by existing feature shaping approaches. We will also make reasonable assumptions on the OOD distribution to derive new prescriptive OOD features for use in practice. Current OOD features in the literature are computed by processing features from the neural network through a deterministic function (e.g., clipping). In contrast, we propose to generalize that approach by allowing for random functions. Let $Z$ denote the feature (a random variable) from the network (penultimate or intermediate layer feature). We denote by $\tilde{Z}$ the random OOD feature (a random variable) that we seek to determine. The distribution of $\tilde{Z}$ is denoted $p(\tilde{z}|z)$.
Thus, rather than solving for a deterministic function $f(Z)$, we instead solve for a random feature $\tilde{Z}$ represented through $p(\tilde{z}|z)$, as in Information Theory (Cover, 1999). Thus, given a feature $z$, the OOD feature is $\tilde{Z} \sim p(\tilde{z}|z)$. We will primarily be concerned with the mean value of this distribution in this paper, to relate to other feature shaping methods. Let $X$ be the random variable indicating the data (e.g., image, text), and $Y$ be the random variable indicating in-distribution ($Y = 0$) and out-of-distribution ($Y = 1$) data. Note this forms a Markov Chain $Y \to X \to Z \to \tilde{Z}$. The Markov Chain property is needed to construct one of the terms of our loss function, discussed next. We propose a novel loss functional to design the OOD random feature. This loss functional is defined on $p(\tilde{z}|z)$. The first term aims to separate the ID and OOD distributions of the random feature $\tilde{Z}$. This is natural since we would like to use the OOD feature to separate the data into in- or out-of-distribution. To achieve this separation, we propose to maximize the symmetrized KL-divergence between $p(\tilde{z}|Y = 0)$ and $p(\tilde{z}|Y = 1)$. Note that recent work (Zhao et al., 2024) also seeks to separate distributions, however differently than our approach, as only the means of the distributions are separated. Also, note that $p(\tilde{z}|Y = y)$ is a function of $p(\tilde{z}|z)$, the variable of optimization, and thus the KL term is a function of $p(\tilde{z}|z)$.
This term is defined as follows:
$$
\begin{array}{r l}
D_{KL}(p(\tilde{z}|z)) = & D_{KL}[p(\tilde{z}|Y=1) \,||\, p(\tilde{z}|Y=0)] \\
& + D_{KL}[p(\tilde{z}|Y=0) \,||\, p(\tilde{z}|Y=1)],
\end{array}
$$
where
$$
D_{KL}[p||q] = \int p(x) \log \frac{p(x)}{q(x)} \, \mathrm{d}x, \quad \text{and} \quad p(\tilde{z}|y) = \int p(\tilde{z}|z) \, p(z|y) \, \mathrm{d}z.
$$
Note that we have used that $p(\tilde{z}|z, y) = p(\tilde{z}|z)$, as the feature is constructed the same way for both ID and OOD data. These equations show the dependence of the OOD feature distributions on $p(\tilde{z}|z)$. The KL divergence is a natural choice for separating distributions and a standard information-theoretic quantity. Unconstrained maximization of the KL divergence is ill-posed, and regularization is needed. Also, it is possible to reduce the dimensions of $Z$ to a few dimensions that are maximally separated but remove information necessary to fully characterize OOD data. Therefore, we need to ensure that $\tilde{Z}$ contains all the information relevant to accurately determine OOD data. With these considerations, we aim to compress the dimensions of $Z$ to form a simple/compact feature, but in a way that preserves the OOD information (contained in the variable $Y$). To achieve this, we adapt the Information Bottleneck (Tishby et al., 2000). In the Information Bottleneck method, a quantization $T$ of a random variable $X$ is formed in such a way as to preserve the information about a random variable $Y$, where $Y$ forms a Markov Chain with $X$. A functional is formulated which, when minimized, yields $T$.
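On a discrete grid, the symmetrized KL separation term above reduces to a Riemann-sum approximation of the two integrals. The following is a minimal sketch under the assumption that both densities are sampled on the same grid with spacing `dz`; the function names are illustrative.

```python
import math

def kl(p, q, dz=1.0):
    """Riemann-sum approximation of KL[p || q] for densities sampled
    on a shared grid with spacing dz (terms with p_i = 0 contribute 0)."""
    return sum(pi * math.log(pi / qi) * dz
               for pi, qi in zip(p, q) if pi > 0)

def symmetrized_kl(p, q, dz=1.0):
    """D_KL[p || q] + D_KL[q || p]: the separation term of the loss."""
    return kl(p, q, dz) + kl(q, p, dz)
```

Identical distributions give zero, and well-separated distributions give large values, which is exactly what maximizing this term (subject to the IB regularization below) pushes toward.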
This is precisely the functional we would like in order to determine $\tilde{Z}$ (where $\tilde{Z}$ is analogous to $T$ and $Z$ is analogous to $X$). The second term of our functional, following from (Tishby et al., 2000), is
$$
\mathbf{IB}(p(\tilde{z}|z)) = I(Z; \tilde{Z}) - \beta I(\tilde{Z}; Y),
$$
where $I$ indicates mutual information, and $\beta > 0$ is a hyperparameter. The first term of (4) is the compression term that measures the mutual information between $Z$ and $\tilde{Z}$; this term is minimized and thus favors $\tilde{Z}$ being a compressed version of $Z$. The second term maximizes the mutual information between $\tilde{Z}$ and $Y$, and thus favors $\tilde{Z}$ retaining OOD-relevant information. Thus, our combined loss functional is
$$
L(p(\tilde{z}|z)) = -D_{KL}(p(\tilde{z}|z)) + \alpha \, \mathbf{IB}(p(\tilde{z}|z)),
$$
which is minimized to determine the conditional distribution $p(\tilde{z}|z)$ of $\tilde{Z}$, where $\alpha > 0$ is a hyperparameter. Our goal is to determine the optimal $p(\tilde{z}|z)$, which can then be used with a score function to determine whether data $z$ is OOD or not. Note that we are seeking to optimize over the set of continuous probability distributions, which forms an infinite-dimensional optimization problem. To gain intuition into the loss functional above, in particular to see that it forms a well-posed problem and that IB regularization is needed, we analyze a simple case with 1D Gaussian distributions that results in a closed-form solution in Appendix A. We verify in the next section that the loss functional, for more complex distributions/features, yields well-posed problems and hence results in an optimal solution.

# 3. Optimization for OOD Features

In this section, we discuss the optimization of the loss functional (5).
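For discrete random variables, the mutual-information quantities entering the IB term (4) can be computed directly from joint probability tables. This sketch is an illustration of the definitions only (the paper works with continuous densities); the table-based representation and function names are assumptions.

```python
import math

def mutual_information(joint):
    """I(A; B) from a joint probability table joint[a][b], using
    I = sum p(a,b) log( p(a,b) / (p(a) p(b)) )."""
    pa = [sum(row) for row in joint]
    pb = [sum(col) for col in zip(*joint)]
    return sum(joint[a][b] * math.log(joint[a][b] / (pa[a] * pb[b]))
               for a in range(len(joint)) for b in range(len(joint[0]))
               if joint[a][b] > 0)

def ib_term(joint_z_ztilde, joint_ztilde_y, beta):
    """IB = I(Z; Z~) - beta * I(Z~; Y), as in eq. (4)."""
    return (mutual_information(joint_z_ztilde)
            - beta * mutual_information(joint_ztilde_y))
```

An independent joint gives zero mutual information, while a perfectly correlated binary joint gives $\log 2$; minimizing the IB term thus trades compression of $\tilde{Z}$ against retention of information about $Y$.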
The loss functional is defined on continuous probability density functions $p(\tilde{z}|z)$, where $z, \tilde{z}$ are continuous. This is an infinite-dimensional optimization problem, and to find the optimal feature one can use the calculus of variations to determine the gradient of $L$ (Troutman, 2012). Setting the gradient to zero and solving for the probability distribution that satisfies the equation gives the necessary conditions for the optimizer. For our loss, this does not yield a closed-form solution, so we instead use the gradient to perform gradient descent.

# 3.1. Loss Under Element-wise Independence of Feature

Because formulating numerical optimization for general multi-dimensional distributions is difficult, we make some simplifications to gain insight into our theory and approach. Even with these simplifications, we will show that the approach can explain popular approaches in the literature and lead to a new state-of-the-art approach. Our first simplification (similar to the element-wise processing assumptions made in existing methods, e.g., (Sun et al., 2021; Zhao et al., 2024)) is to assume that the conditional feature distribution $p(\tilde{z}|z)$ can be factorized as $p(\tilde{z}|z) = \prod_{i=1}^{n} p(\tilde{z}_i|z)$, which assumes conditional independence of the components of $\tilde{z}$ and that each component has the same conditional distribution. We also assume that $p(z|y) = \prod_{i=1}^{n} p(z_i|y)$, that is, the components of $z$ are independent conditioned on $y$.
Under these assumptions, the optimization of the loss functional (5) reduces to a set of optimization problems defined on one-dimensional probability distributions, one per feature component (see Appendix B for details):
$$
\operatorname*{argmin}_{p(\tilde{z}_i|z_i)} L_i(p(\tilde{z}_i|z_i)), \quad i \in \{1, \ldots, n\},
$$
where
$$
\begin{array}{r l}
L_i(p(\tilde{z}_i|z_i)) = & -D_{KL}[p(\tilde{z}_i|0) \,||\, p(\tilde{z}_i|1)] - D_{KL}[p(\tilde{z}_i|1) \,||\, p(\tilde{z}_i|0)] \\
& + \alpha \left[ I(\tilde{Z}_i; Z_i) - \beta I(\tilde{Z}_i; Y) \right].
\end{array}
$$
Thus, we next provide an optimization procedure for the loss functionals above, defined on one-dimensional distributions. For simplicity of notation, we now omit the $i$ subscripts.

# 3.2. Gradient of Loss Functional

We will use gradient descent to optimize the loss functional. Since the problem is non-convex, gradient descent is a natural choice. Given the infinite-dimensional problem, we use the calculus of variations to compute the gradient. We perform the computation for the gradient of (5) in Appendix C and summarize the result in the following theorem: Theorem 3.1 (Gradient of Loss).
The gradient of $D_{KL}(p(\tilde{z}|z))$ (1) with respect to $p(\tilde{z}|z)$ is given (up to an additive function of $z$) by
$$
\begin{array}{r l}
\nabla_{p(\tilde{z}|z)} D_{KL} = & p(z|0) \cdot \left[ \, l(z) \log l(\tilde{z}) - l(\tilde{z}) \, \right] \\
& - \, p(z|1) \cdot \left[ \, l(z)^{-1} \log l(\tilde{z}) + l(\tilde{z})^{-1} \, \right],
\end{array}
$$
where $p(z|y) = p(z|Y=y)$, $p(\tilde{z}|y) = p(\tilde{z}|Y=y)$, and
$$
l(z) = \frac{p(z|1)}{p(z|0)}, \quad \text{and} \quad l(\tilde{z}) = \frac{p(\tilde{z}|1)}{p(\tilde{z}|0)}.
$$
The gradient of $IB(p(\tilde{z}|z))$ (4) is given by
$$
\nabla_{p(\tilde{z}|z)} IB = \sum_{y \in \{0,1\}} p(y) \, p(z|y) \left[ \log \frac{p(\tilde{z}|z)}{p(\tilde{z})} - \beta \log \frac{p(\tilde{z}|y)}{p(\tilde{z})} \right].
$$
The gradient of the full loss $L$ in (5) is then
$$
\nabla_{p(\tilde{z}|z)} L = -\nabla_{p(\tilde{z}|z)} D_{KL} + \alpha \nabla_{p(\tilde{z}|z)} IB.
$$
To simplify further and study a model that more closely resembles OOD feature shaping functions in the literature, we make the following assumption:
$$
p(\tilde{z}|z) \sim \mathcal{N}(\mu(z), \sigma_c(z)),
$$
where $\mathcal{N}$ indicates a Gaussian distribution, $\tilde{z}, z \in \mathbb{R}$, and $\mu, \sigma_c : \mathbb{R} \to \mathbb{R}$ are the mean and standard deviation. We use the subscript $c$ to denote "conditional", to distinguish it from other sigmas used below. We can think of this model as random perturbations of a deterministic feature shaping function $\mu$. The OOD feature's mean value for a given network feature $z$ is $\mu(z)$.
The closer $\sigma_c$ is to zero, the closer the approach is to deterministic feature shaping. Note that if the optimization were to yield $\sigma_c = 0$, then deterministic shaping functions would be optimal. In our numerous simulations, this does not happen, suggesting that random OOD features are preferable to deterministic ones. We now compute the gradients with respect to $\mu$ and $\sigma_c$: Theorem 3.2 (Loss Gradient Under Gaussian Random OOD Feature (12)). The gradient of the loss (5) under (12) is $$ \begin{array}{l} \nabla_{\mu} L(z) = \displaystyle\int \frac{\nabla_{p(\tilde{z}|z)} L(\tilde{z}, z)}{\sigma_c^2(z)} \, [\tilde{z} - \mu(z)] \, p(\tilde{z}|z) \, \mathrm{d}\tilde{z} \quad (13) \\ \nabla_{\sigma_c} L(z) = \displaystyle\int \frac{\nabla_{p(\tilde{z}|z)} L(\tilde{z}, z)}{\sigma_c(z)} \left[ \frac{(\tilde{z} - \mu(z))^2}{\sigma_c^2(z)} - 1 \right] p(\tilde{z}|z) \, \mathrm{d}\tilde{z}, \end{array} $$ where $\nabla_{p(\tilde{z}|z)} L$ is given in (11). # 3.3. Numerical Optimization of Loss We implement a gradient descent algorithm using a discretization of the continuum equations above. We choose a uniform discretization of the space of $z$, i.e., $\{z_i\}_i \subset \mathbb{R}$. We represent $\mu$ and $\sigma_c$ through their samples: $\mu_i = \mu(z_i)$ and $\sigma_{c,i} = \sigma_c(z_i)$.
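The gradient integrals of Theorem 3.2 can be approximated by quadrature over a grid of $\tilde{z}$ values. The sketch below uses a placeholder integrand `g` in place of $\nabla_{p(\tilde{z}|z)} L$ from (11), so it only illustrates the numerical structure of (13), not the full loss:

```python
import math

def gauss_pdf(x, mu, sigma):
    # Density of N(mu, sigma) at x.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def grad_mu(g, mu, sigma_c, n=4001, k=6.0):
    """Quadrature approximation of
    grad_mu L(z) = integral of g(z~)/sigma_c^2 * (z~ - mu) p(z~|z) dz~ (cf. (13)),
    with g standing in for grad_{p(z~|z)} L."""
    lo, hi = mu - k * sigma_c, mu + k * sigma_c
    dz = (hi - lo) / (n - 1)
    total = 0.0
    for j in range(n):
        zt = lo + j * dz
        total += g(zt) / sigma_c**2 * (zt - mu) * gauss_pdf(zt, mu, sigma_c) * dz
    return total

# Sanity check: with g(z~) = z~, the integral is E[(Z - mu) Z] / sigma^2 = 1.
val = grad_mu(lambda zt: zt, mu=0.3, sigma_c=0.7)
```

The same grid-based quadrature structure applies to the $\nabla_{\sigma_c} L$ integral.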
We specify formulas for $p(\tilde{z})$ and $p(\tilde{z}|y)$ under the discretization, which are required in the computation of the approximation to the gradient: $$ \begin{array}{l} p(\tilde{z}|y) = \displaystyle\sum_i p(\tilde{z}|z_i) \, p(z_i|y) \, \Delta z_i = \displaystyle\sum_i \frac{1}{\sigma_{c,i}} G_{\sigma_{c,i}}(\tilde{z} - \mu_i) \, p(z_i|y) \, \Delta z_i \\ p(\tilde{z}) = \displaystyle\sum_y p(y) \, p(\tilde{z}|y). \end{array} $$ Thus, $p(\tilde{z}|y)$ is approximated as a mixture of Gaussians. The gradient descent is shown in Algorithm 1, which takes the ID and OOD distributions as given and determines the Gaussian random feature parameterized through $\mu$ and $\sigma_c$. The complexity of this optimization (which is performed off-line during training) is $\mathcal{O}(NMK)$, where $N$ is the number of samples of $p(z|y)$, $M$ is the number of samples of $p(\tilde{z}|z)$, and $K$ is the number of gradient descent iterations. On a standard single GPU, this took less than a minute. # 4. A Study of OOD Features vs Distribution In this section, we study the resulting OOD features based on various choices of distributions using the algorithm in the previous section, and relate these choices to OOD feature shaping techniques present in the literature. Note that while in practice the OOD distribution is unknown, our theory nevertheless suggests the underlying distributional assumptions of existing methods. This is useful for understanding when these methods will generalize as a function of the type of OOD data. We will also derive a generic OOD shaping function, encompassing properties of several distributions, and show that this shaping function can lead to SoA performance in the next section.
Note that in practice, we have observed that distributions from OOD datasets exhibit similarities to the distributions studied here; see Appendix H. We provide further rationale for studying these distributions below. For this study, we adopt the assumptions of Section 3.3, i.e., that the OOD features are element-wise independent and that the OOD feature is Gaussian, i.e., $p(\tilde{z}|z) \sim \mathcal{N}(\mu(z), \sigma_c(z))$. We will further assume that the ID distribution is Gaussian, i.e., $p(z|0) \sim \mathcal{N}(\mu_0, \sigma)$. We make this assumption for simplicity, and because features in network layers can be approximated well by a Gaussian, as evidenced empirically in (Xu et al., 2023). We will study three OOD distributions next: Gaussian, Laplacian, and a distribution we propose based on the Inverse Gaussian.

# Algorithm 1 1D Gaussian Random Feature Computation

Input: ID/OOD distributions $p(z|y)$, $\alpha$, $\beta$, and learning rate $\eta$
Output: converged mean $\mu_i$ and std $\sigma_{c,i}$ for each $i$
Initialize: $\mu_i = z_i$, $\sigma_{c,i} = \mathrm{const}$
for $n$ iterations do
  for each $z_i$ do
    Compute a discretization of $\tilde{z}$ in its likely range: $\tilde{z}_j^i \in (\mu_i - k\sigma_{c,i}, \mu_i + k\sigma_{c,i})$, where $k \geq 3$
    for each $\tilde{z}_j^i$ do
      Compute $\nabla_{p(\tilde{z}|z)} L(\tilde{z}_j^i, z_i) =$ $$ \begin{array}{rl} & p(z_i|0) \cdot \left[ l(z_i) \log l(\tilde{z}_j^i) - l(\tilde{z}_j^i) \right] - p(z_i|1) \cdot \left[ l(z_i)^{-1} \log l(\tilde{z}_j^i) + l(\tilde{z}_j^i)^{-1} \right] \\ & + \alpha \displaystyle\sum_{y \in \{0,1\}} p(y) \, p(z_i|y) \left[ \log \frac{p(\tilde{z}_j^i | z_i)}{p(\tilde{z}_j^i)} - \beta \log \frac{p(\tilde{z}_j^i | y)}{p(\tilde{z}_j^i)} \right] \end{array} $$
    end for
    Compute $\nabla_{\mu} L(z_i) = \displaystyle\sum_j \frac{\nabla_{p(\tilde{z}|z)} L(\tilde{z}_j^i, z_i)}{\sigma_{c,i}^2} (\tilde{z}_j^i - \mu_i) \, p(\tilde{z}_j^i | z_i) \, \Delta z_i$
    Compute $\nabla_{\sigma_c} L(z_i) = \displaystyle\sum_j \frac{\nabla_{p(\tilde{z}|z)} L(\tilde{z}_j^i, z_i)}{\sigma_{c,i}} \left[ \frac{(\tilde{z}_j^i - \mu_i)^2}{\sigma_{c,i}^2} - 1 \right] p(\tilde{z}_j^i | z_i) \, \Delta z_i$
  end for
  for each $z_i$ do
    $\mu_i \leftarrow \mu_i - \eta \nabla_{\mu} L(z_i)$
    $\sigma_{c,i} \leftarrow \sigma_{c,i} - \eta \nabla_{\sigma_c} L(z_i)$
  end for
end for

Gaussian OOD: First, we study the case of a Gaussian OOD distribution, as it is the most common distribution in probabilistic analysis. Let $p(z|1) \sim \mathcal{N}(\mu_1, \sigma)$. For illustration, we choose $\mu_0 = -0.5$, $\mu_1 = 0.5$, $\sigma = 0.5$, and $\alpha = 1.0$, $\beta = 10$. The converged result of the optimization for $\mu$ and $\sigma_c$ is shown in Figure 1 (positive part). No feature shaping would mean that $\mu$ is the identity map and $\sigma_c = 0$; this solution is plotted in dashed blue. Notice that the optimal mean is not the identity. The mean indicates that the feature has positive slope for small values of $|z|$ (similar to (Sun et al., 2021)) and negative slope for large values of $|z|$ (similar to (Zhao et al., 2024)).
In Appendix E.2, we show that under different distribution parameters, one can get negative values for small $|z|$, as in (Zhao et al., 2024). Interestingly, the optimal standard deviation $\sigma_c(z)$ is non-zero, indicating that randomness is beneficial in terms of the loss in this case. In fact, in all of our simulations across distributions and their hyperparameters, we have observed a non-zero standard deviation. Figure 1. OOD Gaussian Feature Under Gaussian ID/OOD Distributions. Mean (left), standard deviation (right) of the feature. In Figure 2(a), we show the effects of the Information Bottleneck weight $\alpha$. The impact of $\beta$ on the shape is studied in Appendix E.1. For larger $\alpha$ (higher regularization), the mean of the feature becomes flat for large $|z|$, similar to the clipping used in popular methods (Sun et al., 2021; Xu et al., 2023).¹ See Figure 3 for a plot of existing feature shaping methods. Even under the simplifying Gaussian assumptions, we see that our shaping functions have properties of existing methods. Laplacian OOD Distribution: Next, we consider the Laplacian distribution for the OOD distribution, i.e., $p(z|1) = \frac{1}{2b} \exp(-|z - \mu_1| / b)$. The intuition for choosing this distribution is that it has a heavier tail than the Gaussian and is thus better able to model outliers, and it seems reasonable that OOD data would be considered outliers. We show the result of the mean of the feature in Figure 2(b). We notice that when $|z|$ is small, the mean OOD feature is zero, which indicates a suppression of low values (this is used in VRA (Xu et al., 2023) and ASH (Djurisic et al., 2022)). Note that this is consistent across $\alpha$ values; larger values increase the suppression region. We also see that large values of $|z|$ are clipped or suppressed (with larger $\alpha$), approaching a zero slope. The jump discontinuity is also present in VRA and ASH.
There also appears to be a positively sloped linear function for intermediate values of $|z|$, similar to VRA. Inverse Gaussian OOD Distribution: Next, we consider a distribution that may generically hold for OOD data and can be used in the absence of prior information about the OOD distribution. If the ID distribution is Gaussian, we can formulate a distribution that has high probability outside the domain where the ID distribution has high probability. To this end, one can consider a variant of the Inverse Gaussian, defined as follows. Let $d(z) = |z - \mu_0| / \sigma$, where $\mu_0, \sigma$ are the mean and standard deviation of the ID distribution. This is a distance to the ID distribution. We would like the OOD distribution to have high probability when $d(z)$ is large, and thus we consider $p(z|1) \sim IG(d(z); \mu_1, \lambda)$, where $IG$ denotes the Inverse Gaussian distribution: $$ p_{IG}(x; \mu, \lambda) = \sqrt{\frac{\lambda}{2\pi x^3}} \exp\left( -\frac{\lambda (x - \mu)^2}{2\mu^2 x} \right), $$ which is plotted in Appendix D. Note that there is some overlap of this distribution with the ID Gaussian. As shown in Figure 2(c), the Inverse Gaussian distribution results in a qualitatively similar OOD feature to the Laplacian distribution: suppression of small $|z|$ values, clipping/flattening of large $|z|$ values, and a positively sloped linear function for intermediate values of $|z|$. For large $\alpha$ we have flattening similar to clipping, and smaller $\alpha$ results in a negative slope, similar to the other distributions. We summarize the key observations. Clipping, as done in ReAct, seems to be a universal property across all the OOD distributions for large regularization.
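The Inverse Gaussian density above is straightforward to evaluate; a minimal sketch (parameter values illustrative only):

```python
import math

def p_ig(x, mu, lam):
    """Inverse Gaussian density p_IG(x; mu, lambda), defined for x > 0."""
    return math.sqrt(lam / (2 * math.pi * x**3)) * math.exp(
        -lam * (x - mu) ** 2 / (2 * mu**2 * x))

def p_ood(z, mu0, sigma, mu1, lam):
    """OOD model p(z|1) ~ IG(d(z); mu1, lam): mass is placed where the
    distance d(z) = |z - mu0| / sigma to the ID distribution is large."""
    d = abs(z - mu0) / sigma
    return p_ig(d, mu1, lam) if d > 0 else 0.0

# Sanity check: p_IG integrates to ~1 over x > 0 (here mu = 1, lam = 1).
dx = 1e-3
mass = sum(p_ig(1e-6 + j * dx, 1.0, 1.0) * dx for j in range(int(60 / dx)))
```

Note that `p_ood` itself is unnormalized as written, since $d(z)$ folds both tails of $z$ onto the positive axis; it is meant only to show how the distance transform enters the density.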
In the next section we show that for noisier OOD datasets larger regularization is beneficial, and so the clipping mitigates noise, as noted in (Sun et al., 2021). Next, the OOD distributions that are heavier-tailed result in suppression (zeroing out) of low $|z|$ values. This is consistent with the VRA and ASH methods. All distributions yield a positively sloped region for intermediate values of $|z|$. Our results suggest that ReAct and FS-OPT may be operating under an implicit Gaussian OOD assumption, for high regularization (ReAct) and low regularization (FS-OPT). VRA and ASH seem to implicitly assume heavier-tailed OOD distributions. Piecewise Linear Shaping: The above mean shaping functions (from the Gaussian, Laplace and Inverse Gaussian OOD distributions) all approximately fit within a particular piecewise linear function family, as shown in Figure 4, where $z_1, z_2, y_0, y_{1a}, y_{1b}, m_1, m_2$ are hyperparameters. Therefore, in practice, if the distribution is unknown, one can choose this family of shaping functions, which implicitly assumes any of the aforementioned three distributions. Because many existing SoA methods implicitly make one of the three distributional assumptions, this family makes more general distributional assumptions than existing SoA, thus potentially offering generalization to more OOD datasets while not being so general as to lose discriminability. In the experimental section we explore this family of shaping functions and show that we can obtain SoA results. # 5. Implementation of New OOD Detection In this section, we provide the implementation details for our new approaches to OOD detection, using the simplifying assumptions presented in Section 3. We provide the details for two cases, where the ID/OOD distributions are known
Figure 2.
The mean of the OOD Gaussian Feature under the Gaussian (left), Laplace (middle) and Inverse Gaussian (right) OOD distributions for varying weights on the Information Bottleneck, $\alpha$. For all plots, $\beta = 10$. For the Gaussian case, $p(z|0) \sim \mathcal{N}(-0.5, 0.5)$ and $p(z|1) \sim \mathcal{N}(0.5, 0.5)$. For the Laplace case, $p(z|0) \sim \mathcal{N}(0, 0.66)$ and $p(z|1) \sim Lap(0, 1)$. In the Inverse Gaussian case, $p(z|0) \sim \mathcal{N}(0, 0.66)$ and $p(z|1) \sim IG(d(z); 3.3, 15)$. For visualization purposes, we only show the positive part.

Figure 3. Plot of existing feature shaping functions from SoA methods: ReAct (Sun et al., 2021), VRA (Xu et al., 2023), FS-Opt (Zhao et al., 2024), and variants of ASH (Djurisic et al., 2022).

$$ f(z) = \begin{cases} y_0 + \dfrac{y_{1a} - y_0}{z_1}\, z, & 0 \leq z < z_1, \\ y_{1b}, & z = z_1, \\ y_{1b} + m_1 (z - z_1), & z_1 < z \leq z_2, \\ y_{1b} + m_1 (z_2 - z_1) + m_2 (z - z_2), & z > z_2 \end{cases} $$

Figure 4. A piecewise-linear family of functions that approximately encompasses the mean value of our OOD feature shaping functions across the OOD distributions examined in this paper.

and unknown. In the latter case, we apply the piecewise family of feature shaping functions derived in the previous section (Figure 4).
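The family in Figure 4 is direct to implement; a sketch with hypothetical hyperparameter values (in practice these are tuned on a validation set), applied to $z \geq 0$ as plotted:

```python
def piecewise_shape(z, z1, z2, y0, y1a, y1b, m1, m2):
    """Piecewise-linear shaping family of Figure 4 (for z >= 0), with a jump
    from y1a to y1b at z = z1."""
    if z < z1:
        return y0 + (y1a - y0) / z1 * z
    if z <= z2:
        return y1b + m1 * (z - z1)
    return y1b + m1 * (z2 - z1) + m2 * (z - z2)

# Hypothetical hyperparameters for illustration only: a rising segment, a
# jump at z1, a unit-slope middle segment, and clipping (zero slope) past z2.
params = dict(z1=1.0, z2=2.0, y0=0.0, y1a=0.5, y1b=1.0, m1=1.0, m2=0.0)
```

Setting `m2 = 0` reproduces ReAct-style clipping of large activations, while `y0 = y1a = 0` reproduces VRA/ASH-style suppression of small ones.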
We assume that a validation set of ID and OOD data is available (as in the existing literature); the choices are given in our experiments section. A trained neural network is also provided. The network feature vectors and their ID/OOD labels for the validation set are $\{\mathbf{z}_i, y_i\}$. Consistent with our simplifying assumptions and the literature, each component of the network feature $\mathbf{z}$ is processed independently, and in this paper they are processed by the same shaping function $\mu$. Off-line Training: In the case that the forms of the ID and OOD distributions are known, the hyperparameters of the distributions are estimated from the validation set (rasterizing the vector data). Using the fitted distributions, Algorithm 1 is run to compute the optimal $\mu^*, \sigma_c^*$. In the case that the distributions are unknown, we assume that the feature shape fits the piecewise family in the previous section (i.e., the OOD distribution is one of Gaussian, Laplacian or IG). The hyperparameters for the piecewise family are tuned by, e.g., minimizing the false positive rate at a true positive rate of $95\%$ (FPR95) on the validation set; this gives the optimal shaping function $\mu^*$. Online Operation: During operation, the network feature $\mathbf{z}$ is extracted and then shaped via the function $\tilde{\mathbf{z}} = \mu^*(\mathbf{z}) = (\mu^*(z_1), \ldots, \mu^*(z_n))$. Subsequently, $\tilde{\mathbf{z}}$ is input to a scoring function (in this paper, the energy score (Liu et al., 2021)), which is then thresholded to produce the ID/OOD label. # 6. Experiments We validate our theory by comparing our new shaping function to SoA for OOD detection on standard benchmarks. Datasets and Model architectures.
We experiment with ResNet-50 (He et al., 2016), MobileNet-v2 (Sandler et al., 2018), and the vision transformers ViT-B-16 and ViT-L-16 (Dosovitskiy et al., 2021), with ImageNet-1k (Russakovsky et al., 2015) as ID data, and benchmark on the OOD datasets/methods used in (Zhao et al., 2024). For the ImageNet benchmark, we evaluate performance across eight OOD datasets: Species (Hendrycks et al., 2022), iNaturalist (Horn et al., 2018), SUN (Xiao et al., 2010), Places (Zhou et al., 2018), OpenImage-O (Wang et al., 2022), ImageNet-O (Hendrycks et al., 2021), Texture (Cimpoi et al., 2014), and MNIST (Deng, 2012). Moreover, we also experiment with CIFAR-10 and CIFAR-100 as ID data, for which we use a ViT-B-16 (Dosovitskiy et al., 2021) finetuned on CIFAR-10/100, consistent with (Fort et al., 2021a), and an MLPMixer-Nano model trained on CIFAR-10/100 from scratch. We evaluate eight OOD datasets: TinyImageNet (Torralba et al., 2008), SVHN (Netzer et al., 2011), Texture (Cimpoi et al., 2014), Places365 (Zhou et al., 2018), LSUN-Cropped (Yu et al., 2016), LSUN-Resized (Yu et al., 2016), iSUN (Xu et al., 2015), and CIFAR-100/CIFAR-10 (CIFAR-100 treated as OOD for CIFAR-10, and vice versa). We compare our results against SoA methods across two categories: penultimate-layer element-wise feature shaping approaches, to which our theory currently applies, and other methods. Penultimate-layer feature shaping approaches involve element-wise feature shaping functions applied directly to the penultimate layer of the model before computing the energy score for OOD detection. Approaches in this category are: Energy (Liu et al., 2021), ReAct (Sun et al., 2021), BFAct (Kong & Li, 2023), VRA-P (Xu et al., 2023) and FS-OPT (Zhao et al., 2024).
The second category of methods, which are not directly comparable to our approach because they may not involve feature shaping or may involve additions to feature matching, is included for reference: softmax-based confidence scoring (MSP (Hendrycks & Gimpel, 2018b)), input perturbation and temperature scaling (ODIN (Liang et al., 2020)), intermediate-layer shaping with subsequent processing by the following layers (ASH-P, ASH-B, ASH-S (Djurisic et al., 2022)), and weight sparsification (DICE (Sun & Li, 2022)). As in ReAct (Sun et al., 2021), for the ImageNet-1k benchmarks we use a validation set comprising the validation split of ImageNet-1k as ID data, and Gaussian noise images as OOD data, generated by sampling from $\mathcal{N}(0, 1)$ at each pixel location, to tune the hyperparameters of our piecewise linear activation shaping function. For the CIFAR-10/100 benchmarks, following ODIN (Liang et al., 2020), we employ a random subset of the iSUN dataset (Xu et al., 2015) as validation OOD data for our hyperparameter tuning. As ID validation data for CIFAR-10/100 we use the test splits of the corresponding datasets. The hyperparameters are optimized using Bayesian optimization (Frazier, 2018), by minimizing the FPR95 metric on the validation set. Resulting hyperparameters are reported in Appendix G. Metrics. We utilize two standard evaluation metrics, following (Sun et al., 2021; Zhao et al., 2024): FPR95, the false positive rate when the true positive rate is $95\%$ (abbreviated as FP), and the area under the ROC curve (AU). Results. The results on the ImageNet-1k benchmarks (Table 1) and the CIFAR-10/100 benchmarks (Table 2) demonstrate that our approach achieves state-of-the-art performance among comparable feature-shaping methods in the first category described above.
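The FPR95 criterion used both for tuning and evaluation can be computed directly from validation scores. A minimal sketch, assuming higher scores indicate ID:

```python
import math

def fpr_at_tpr(id_scores, ood_scores, tpr=0.95):
    """False positive rate on OOD data at the score threshold achieving the
    target true positive rate on ID data (FPR95 for tpr=0.95)."""
    s = sorted(id_scores, reverse=True)
    # Smallest threshold that still classifies >= tpr of ID samples as ID.
    k = max(1, math.ceil(tpr * len(s)))
    threshold = s[k - 1]
    fp = sum(1 for v in ood_scores if v >= threshold)
    return fp / len(ood_scores)

# Perfectly separated scores give zero false positives at 95% TPR.
fpr = fpr_at_tpr([float(i) for i in range(1, 101)], [0.0] * 50)
```

In the tuning loop described above, this quantity would be minimized over the shaping-function hyperparameters (e.g., via Bayesian optimization).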
Specifically, when compared to pointwise feature-shaping techniques such as ReAct, BFAct, VRA-P, and FS-OPT, our method consistently outperforms these approaches, yielding the best overall results in this category. While ASH variants marginally outperform our method in some cases, it is important to note that ASH employs a fundamentally different approach: it modifies activations through intermediate-layer pruning and rescaling of features, which are then routed back into the network for further processing, and is thereby not an element-wise feature shaping approach. Our theory currently does not address this case. For the vision transformers ViT-B-16 and ViT-L-16, our method achieves the lowest FP among all competing methods, providing evidence of generalization across different architectures. Overall, our results demonstrate that our feature shaping is highly competitive with the latest SoA, while also providing a theoretical explanation. Computational Time: The inference cost of our feature shaping method is on the order of microseconds for a $256 \times 256 \times 3$ image, using PyTorch on an NVIDIA A100-80GB GPU. This is comparable to other piecewise linear shaping approaches such as ReAct and VRA. Regularization as a Function of OOD Data. We study how the IB regularization should be chosen with respect to properties of the OOD data, which is important in practical scenarios. In particular, we conduct an experiment suggesting that higher IB regularization is beneficial for "noisier" OOD datasets. To this end, we conduct a series of controlled experiments using ResNet-50 trained on the ImageNet-1k dataset, aiming to determine the structure of optimal feature-shaping functions as a function of noise. We apply additive Gaussian noise $\mathcal{N}(0, \sigma)$ to the ImageNet-1k validation set and consider the result as OOD data. The standard deviations $\sigma$ used are $\{25, 50, 100, 150, 255\}$, creating 5 OOD datasets.
Visualizations of this data and the activation patterns are shown in Appendix F. We observe that this data closely resembles the high variance of activation patterns in OOD datasets noted in (Sun et al., 2021), and so our noisy data serves to mimic OOD data with varying noise levels. By examining how the learned features adapt under different noise levels, we gain insight into the relationship between the OOD data and the IB regularization term for optimal shaping. In Figure 5, we plot the IB term of the optimal shaping function, optimized over hyperparameters, at each noise level. Note that we used the Laplacian OOD distribution to estimate the IB term (the Inverse Gaussian yields similar results). Higher noise levels result in optimal shaping functions with lower IB values, which correspond to a higher degree of regularization of the IB term in our loss functional. Thus, noisier OOD datasets require higher IB regularization for the best OOD detection performance.
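The synthetic OOD variants used in the controlled experiment above can be sketched in a few lines; pixel values are assumed to lie in [0, 255], and the image representation here (nested lists) is purely illustrative:

```python
import random

def add_noise(image, sigma, rng=random):
    """Create a synthetic OOD image by adding pixel-wise Gaussian noise
    N(0, sigma) and clipping to the valid [0, 255] range."""
    return [[min(255.0, max(0.0, px + rng.gauss(0.0, sigma))) for px in row]
            for row in image]

# Five OOD variants at increasing noise levels, as in the experiment.
sigmas = [25, 50, 100, 150, 255]
image = [[128.0] * 4 for _ in range(4)]
variants = [add_noise(image, s) for s in sigmas]
```

Each `sigma` yields one synthetic OOD dataset against which the optimal shaping hyperparameters (and hence the implied IB regularization) are re-tuned.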
We present a theory for the construction of out-of-distribution (OOD) detection features for neural networks. We introduce random features for OOD through a novel information-theoretic loss functional consisting of two terms: the first, based on the KL divergence, separates the resulting in-distribution (ID) and OOD feature distributions; the second is the Information Bottleneck, which favors compressed features that retain the OOD information. We formulate a variational procedure to optimize the loss and obtain OOD features. Based on assumptions on the OOD distributions, one can recover properties of existing OOD features, i.e., shaping functions. Furthermore, we show that our theory can predict a new shaping function that outperforms existing ones on OOD benchmarks. Our theory provides a general framework for constructing a variety of new features with clear explainability.
# 1 Introduction Leading reasoning models on math, science, and coding benchmarks learn to utilize chain-of-thought via reinforcement learning with verifiable rewards (RLVR) [1, 2, 3, 4]. These models are optimized to maximize verifiable rewards by comparing predicted final answers to ground truth. Models trained with RLVR are capable of surpassing previous approaches (such as supervised finetuning (SFT) or reinforcement learning from human feedback (RLHF)) on challenging math and science benchmarks due to the availability of verifiable rewards at scale. Yet the drivers of these gains, and how they evolve with model scale, remain poorly understood. [5] attribute RLVR's improvements almost entirely to the distillation of the base model's existing knowledge. In this work, we instead formalize RLVR's "improvement space" as the sum of two orthogonal effects, distillation and genuine capability gain, and investigate how each of these effects evolves as models scale. Specifically, there are at least two ways to improve a language model's ability to solve challenging reasoning problems autonomously: 1. Self-distillation, in which knowledge is distilled from pass@$k$ into pass@1 [6, 7, 8, 9, 10, 11] 2. Capability gain via RL, in which a language model learns to solve new problems it previously was not able to solve even when given $k$ attempts. In this work, we propose a formalism to measure the extent to which learning during RLVR is driven by self-distillation or capability gain. We then seek to leverage these insights to accelerate the learning of new problems during RLVR by incorporating guidance into the reasoning model's context. We therefore address two main research questions: 1. Self-distillation or capability gain? To what extent is learning during RLVR merely redistributing probability mass among outputs the model already knows ("self-distillation") versus genuinely expanding the model's problem-solving capabilities? 2. Do guidance-conditioned trajectories on failure accelerate learning?
If we give the policy selective guidance on complete problem failure, while requiring the trajectories to be generated by the same policy state (and therefore close to the on-policy distribution), can we close knowledge gaps faster than (a) using fully off-policy data, (b) providing no guidance at all, or (c) always providing guidance? # 2 Methods # 2.1 Self-Distillation vs. Capability Gain We study the post-training dynamics that govern LLMs learning to solve new tasks. We measure this ability via the rewards $\mathcal{R}$ acquired from an environment, such as the test set of a benchmark. Specifically, we are interested in how an LLM learns to solve new problems during RL. To this end, we define $\mathcal{R}^{net}$ as the sum of net new rewards acquired after RL for a policy $\pi_{\mathrm{RL}}$, where $U^{\pi_{\mathrm{init}}}$ is the set of indices of problems unsolved prior to RL and $S^{\pi_{\mathrm{init}}}$ is the set of indices of problems solved prior to RL. We define solved and unsolved here via pass@1 correctness. Note that $\mathcal{R}^{net}$ can be calculated against both training data and test data, and in practice is equal to the change in accuracy before and after training. Note that the progress term can be decomposed into problems that have at least one correct solution in a sample $\mathcal{V}_i = \{\hat{y}_1, \ldots, \hat{y}_k\}$ of $k$ responses from $\pi_{\mathrm{init}}$ to the same prompt (i.e., pass@$k$ = 1) and problems that have no correct solution in the sample (i.e., pass@$k$ = 0). In order to understand how RLVR teaches models to solve new reasoning problems in practice, we set $k$ equal to the number of rollouts per problem used during training ($k$ may be set higher; we define effective vs. absolute capability gain in Appendix C).
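The decomposition just described can be made concrete with correctness records; a minimal sketch, assuming boolean correctness samples per problem (data structures and names here are hypothetical):

```python
def decompose_progress(pre, post):
    """Split problems newly solved (at pass@1) after RL into self-distillation
    (pre-RL pass@k = 1: some pre-RL sample was correct) and capability gain
    (pre-RL pass@k = 0: no pre-RL sample was correct).

    pre:  {problem_id: list of k bools, correctness of pre-RL samples}
    post: {problem_id: bool, pass@1 correctness after RL}
    """
    # Unsolved at pass@1 pre-RL; the first sample proxies the greedy answer.
    unsolved = {q for q, ys in pre.items() if not ys[0]}
    newly = {q for q in unsolved if post[q]}
    distilled = {q for q in newly if any(pre[q])}   # pass@k = 1 before RL
    gained = {q for q in newly if not any(pre[q])}  # pass@k = 0 before RL
    return distilled, gained

pre = {"a": [False, True], "b": [False, False], "c": [True, True]}
post = {"a": True, "b": True, "c": True}
distilled, gained = decompose_progress(pre, post)
```

Here problem "a" is a self-distillation win (it was already in pass@$k$), while "b" is genuine capability gain.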
Decomposing progress into the above terms enables us to understand the mechanisms driving RLVR. We empirically analyze these components in Section 3.1 and find that while effective capability gain exists, progress is dominated by self-distillation. # 2.2 Guide: Accelerating learning with guidance on failure Inspired by our empirical results showing that self-distillation dominates the learning of new problems during RLVR (see Figure 1), concurrent work showing similar results [5], and a rich history of success in RL of using off-policy data to improve training efficiency [12], we seek to increase the proportion of correct rollouts during RL. We hypothesize that a particularly effective means to do this is to guide the policy with a prompt-specific hint, $h$, such that the model is required to reach the solution in its own terms: $\pi_\theta(o_{i,t} \mid q, h, o_{i,<t})$. In an initial validation of this hypothesis, we find that including hints significantly improves pass@$k$, as shown in Figure 2. To this end, we derive a new class of online RL training algorithms which we call Guide. We describe the general form and a specialization to PPO in Appendix A. Further, we carefully analyze a specialization of Guide to GRPO in which we (1) provide guidance on unsolved prompts and (2) apply an off-policy importance weight so that samples drawn with guidance still optimize performance without guidance, as shown in Algorithm 1. GRPO In typical RLVR with GRPO, for each question $q$, we sample $k$ outputs $\{o_i\}_{i=1}^k$ from the old policy $\pi_{\theta_{\mathrm{old}}}(\cdot \mid q)$ and score them, yielding rewards $\{r_i\}_{i=1}^k$.
We apply per-prompt $z$-normalization and set the token-level advantages $\hat{A}_{i,t}$ for all tokens $t$ in each output $o_i$ equal to the corresponding normalized reward: $$ \hat{A}_{i,t} = \tilde{r}_i = \frac{r_i - \mu_r}{\sigma_r}, \qquad t = 1, \ldots, |o_i|. $$ The GRPO objective maximized during policy updates is defined as: $$ \mathcal{J}_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{q \sim P(Q),\, \{o_i\}_{i=1}^{k} \sim \pi_{\theta_{\mathrm{old}}}(o \mid q)} \Bigg[ \frac{1}{k} \sum_{i=1}^{k} \frac{1}{|o_i|} \sum_{t=1}^{|o_i|} \Big\{ \min \Big[ \frac{\pi_\theta(o_{i,t} \mid q, o_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t} \mid q, o_{i,<t})} \hat{A}_{i,t},\ \mathrm{clip}\Big( \frac{\pi_\theta(o_{i,t} \mid q, o_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t} \mid q, o_{i,<t})}, 1 - \varepsilon, 1 + \varepsilon \Big) \hat{A}_{i,t} \Big] - \beta D_{\mathrm{KL}}[\pi_\theta \,\|\, \pi_{\mathrm{ref}}] \Big\} \Bigg], $$ where $\varepsilon$ and $\beta$ are hyperparameters controlling clipping and KL regularization, respectively. Guide We make the observation that because we want the model to perform well without guidance, the guided trajectories are off-policy. To avoid biasing the gradient, we should appropriately compute the importance weight (Sutton & Barto, 1998).
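The per-prompt $z$-normalization defining $\hat{A}_{i,t}$ above is only a few lines in practice; a sketch (with a small epsilon added to guard the degenerate group where all rewards are equal, an implementation detail not stated in the text):

```python
import math

def group_advantages(rewards, eps=1e-8):
    """GRPO advantage: z-normalize rewards within a prompt's rollout group;
    the result is broadcast to every token of the corresponding output."""
    k = len(rewards)
    mu = sum(rewards) / k
    var = sum((r - mu) ** 2 for r in rewards) / k
    sigma = math.sqrt(var)
    return [(r - mu) / (sigma + eps) for r in rewards]

adv = group_advantages([1.0, 0.0, 0.0, 1.0])
```

With binary rewards, correct rollouts in a mixed group receive positive advantage and incorrect ones negative, which is what makes all-fail groups contribute no learning signal in vanilla GRPO.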
To this end, we modify the GRPO objective to $$ \mathcal{J}(\theta) = \mathbb{E}_{q \sim P(Q)} \Bigg[ \frac{1}{k} \sum_{i=1}^{k} \frac{1}{|o_i|} \sum_{t=1}^{|o_i|} \Big\{ \min \Big[ \frac{\pi_\theta(o_{i,t} \mid q, o_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t} \mid q, h, o_{i,<t})} \hat{A}_{i,t},\; \mathrm{clip}\Big( \frac{\pi_\theta(o_{i,t} \mid q, o_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t} \mid q, h, o_{i,<t})}, 1-\varepsilon, 1+\varepsilon \Big) \hat{A}_{i,t} \Big] - \beta D_{\mathrm{KL}}\big[\pi_\theta \,\|\, \pi_{\mathrm{ref}}\big] \Big\} \Bigg], $$ where $h$ indicates some guidance (or hint) suffix appended to the prompt $q$. We unify the two objective functions to form our objective $\mathcal{J}_{\mathrm{Guide}}$: $$ \mathcal{J}_{\mathrm{Guide}}(\theta) = \mathbb{E}_{q \sim P(Q)} \Bigg[ \frac{1}{k} \sum_{r \in S(q)} \frac{1}{|r|} \sum_{t=1}^{|r|} \Big\{ \min \Big[ \frac{\pi_\theta(r_t \mid x_q, r_{<t})}{\pi_{\theta_{\mathrm{old}}}(r_t \mid s_q, r_{<t})} \hat{A}_{r,t},\; \mathrm{clip}\Big( \frac{\pi_\theta(r_t \mid x_q, r_{<t})}{\pi_{\theta_{\mathrm{old}}}(r_t \mid s_q, r_{<t})}, 1-\varepsilon, 1+\varepsilon \Big) \hat{A}_{r,t} \Big] - \beta D_{\mathrm{KL}}\big[\pi_\theta \,\|\, \pi_{\mathrm{ref}}\big] \Big\} \Bigg], $$ where $S(q)$ is the set of $k$ sampled roll-outs for prompt $q$, containing $k$ plain roll-outs $r \sim \pi_{\theta_{\mathrm{old}}}(\cdot \mid x_q)$ and, if all fail, $k$ guided roll-outs $r \sim \pi_{\theta_{\mathrm{old}}}(\cdot \mid \tilde{x}_q)$, where $x_q$ and $\tilde{x}_q$ are the plain and guided prompts respectively, $s_q \in \{x_q, \tilde{x}_q\}$ is the prompt used to generate rollout $r$, $\hat{A}_{r,t}$ is the group-normalized advantage at token $t$, and $\varepsilon, \beta$ are the PPO-style clipping and KL-regularization hyperparameters.³

Algorithm 1 Guide-GRPO: Group Relative Policy Optimization with guidance-augmented rollouts on failure
Input: initial policy $\pi_{\theta_{\mathrm{init}}}$; reward model $r_\varphi$; task prompts $\mathcal{D}$; hyper-parameters $\varepsilon$, $\beta$, $\mu$; $k$ roll-outs per prompt; guidance suffix guid
1: $\pi_\theta \leftarrow \pi_{\theta_{\mathrm{init}}}$
2: for $iter = 1, \dots, I$ do
3: $\pi_{\mathrm{ref}} \leftarrow \pi_\theta$ ▷ freeze reference
4: for $step = 1, \ldots, M$ do
5: Sample minibatch $\mathcal{D}_b \subset \mathcal{D}$
6: $\pi_{\theta_{\mathrm{old}}} \leftarrow \pi_\theta$ ▷ snapshot old policy
7: Sample $k$ outputs $\{o_i\}_{i=1}^{k} \sim \pi_{\theta_{\mathrm{old}}}(\cdot \mid q)$ for every $q \in \mathcal{D}_b$
8: Identify unsolved set $U = \{q \in \mathcal{D}_b : \text{all } k \text{ roll-outs fail}\}$
9: for $q \in U$ do
10: Sample $k$ guided rollouts $\tilde{o} \sim \pi_{\theta_{\mathrm{old}}}(\cdot \mid \langle q, \mathtt{guid} \rangle)$
11: end for
12: Compute rewards $r_i = r_\varphi(o_i)$ (and $r_{\tilde{o}}$ if present)
13: Compute advantages $\hat{A}_{i,t}$ via group-relative estimation
14: for $gstep = 1, \ldots, \mu$ do
15: Update $\pi_\theta$ by maximising the Guide objective in Eq.
6
16: end for
17: end for
18: end for
19: return $\pi_\theta$
Output: fine-tuned policy $\pi_\theta$

Guide injects hints only when all unguided roll-outs fail, and an importance weight projects those off-policy trajectories onto the on-policy gradient direction. This focuses the learning signal on the hardest unsolved problems while keeping every guided update aligned with the plain-prompt objective, thereby achieving faster progress than vanilla GRPO. We formalize this notion in the following theorem and provide a proof in Appendix §B:

Theorem 1 (Guide-GRPO improves learning efficiency) With selective guidance and importance weighting, the one-step expected improvement satisfies $$ \mathbb{E}\big[\Delta \mathcal{R}_{\mathrm{Guide}}\big] = \eta \sum_{q \in U} \big[ A_q p_q^2 + (1-p_q)^k \, \mathbb{E}[\tilde{A}_q] \, p_q \big] > \eta \sum_{q \in U} A_q p_q^2 = \mathbb{E}\big[\Delta \mathcal{R}_{\mathrm{Vanilla}}\big], $$ for sufficiently small $\eta$, where $p_q$ is the plain success probability, $\tilde{A}_q$ the guided advantage, $A_q$ the plain advantage, $k$ the number of roll-outs, and $U$ the set of unsolved prompts. Note that Guide's relative gain over vanilla GRPO increases when
• the failure probability $(1-p_q)^k$ is large (hard prompts),
• the guided advantage $\mathbb{E}[\tilde{A}_q]$ is large on average relative to the full rollout group,
• the success probability under the unguided policy $p_q$ is non-zero (so credit can propagate).

³We further re-weight importance ratios with an adaptive policy-reshaping scheme detailed in Appendix §E.
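The selective-guidance rule of Algorithm 1 (lines 7–13) can be sketched in a few lines of Python. This is a minimal illustration, not the training code: the sampler and verifier are toy stand-ins, and the per-token importance ratios of the Guide objective (computed against the stored sampling prompt $s_q$) are left to the policy-update step, which is omitted here.

```python
def group_normalized_advantages(rewards):
    """GRPO's group-relative baseline: (r - mean) / std within the group."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    if std == 0:
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]

def guide_rollouts(sample, verify, prompt, guidance, k=4):
    """Keep the k plain rollouts unless ALL of them fail, in which case
    resample k rollouts from the hint-augmented prompt. Each rollout is
    returned together with the prompt it was sampled from (s_q), which
    the importance ratio in the Guide objective is computed against."""
    plain = [sample(prompt) for _ in range(k)]
    rewards = [verify(o) for o in plain]
    if any(rewards):  # at least one success: stay fully on-policy
        return [(o, prompt) for o in plain], group_normalized_advantages(rewards)
    guided_prompt = prompt + "\n[HINT] " + guidance
    guided = [sample(guided_prompt) for _ in range(k)]
    rewards = [verify(o) for o in guided]
    return [(o, guided_prompt) for o in guided], group_normalized_advantages(rewards)
```

With binary rewards, the group-normalized advantages of a half-correct guided group come out as $\pm 1$, matching the group-relative estimation of line 13.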
# 3 Experiments

# 3.1 RLVR Drives Learning Progress Mainly via Self-Distillation

We investigate the mechanisms driving performance improvements in models trained using RLVR, explicitly decomposing the observed improvements into two measurable effects: capability gains and self-distillation. Concretely, for the experiments in this section, we define capability gain and self-distillation as follows:

Capability gain The count of problems that are initially unsolved by the untrained policy, even with multiple attempts (pass@16), which subsequently become solvable by the RLVR-trained policy within a single sample (pass@1).⁴

Self-distillation The count of problems solvable by the untrained policy with multiple sampling attempts (pass@16) that later become solvable with just one attempt (pass@1) during RLVR training.

Figure 1: Capability gain (left), self-distillation (middle), and combined progress (capability gain $+$ self-distillation; right) across training steps on all test sets.

# 3.1.1 Experimental setup

For our base models, we use Qwen 2.5 [13] at five model scales, 0.5B, 3B, 7B, 32B, and 72B, as the starting untrained policies. Each run is trained for 256 steps using the GRPO training objective on a dataset composed of math, STEM, and coding tasks. We evaluate every 16 training steps on the following benchmarks: GSM8K [14], MATH500 [15], AIME24 [16], AIME25 [17], AMC23 [18], GPQA-DIAMOND [19], OLYMPIADBENCH [20], LEETCODE [21], LIVECODEBENCH [22], and HUMANEVAL [23]. To measure variance in capability gain and self-distillation across runs (as defined in Section 3.1), we perform 10 independent trials, each with its own random seed. We first generate 100 rollouts at temperatures 1.0 and 0.0 for every problem in the full test set.
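Given per-problem outcomes, the two counts defined in Section 3.1 can be computed directly. The sketch below is ours, not the paper's evaluation code; in particular, the reading that self-distillation excludes problems that were already solvable at pass@1 before training is an assumption based on the word "become" in the definition.

```python
def decompose_progress(pre_pass16, pre_pass1, post_pass1):
    """Split net RLVR progress into capability gain and self-distillation.

    pre_pass16[i]: untrained policy solves problem i within 16 samples
    pre_pass1[i]:  untrained policy solves problem i in a single sample
    post_pass1[i]: trained policy solves problem i in a single sample
    """
    n = len(post_pass1)
    # Capability gain: unsolved even at pass@16 before training,
    # solved at pass@1 after training.
    capability_gain = sum(
        1 for i in range(n) if post_pass1[i] and not pre_pass16[i]
    )
    # Self-distillation: reachable within 16 untrained samples, newly
    # solved in one attempt after training (problems already at pass@1
    # before training are excluded; this exclusion is our assumption).
    self_distillation = sum(
        1 for i in range(n)
        if post_pass1[i] and pre_pass16[i] and not pre_pass1[i]
    )
    return capability_gain, self_distillation
```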
Then, for each trial, to compute pass@1, we randomly sample one of the 100 temperature-0.0 rollouts and judge its correctness; to compute pass@16, we randomly sample 16 trajectories from the temperature-1.0 rollouts and judge whether any of the samples is correct. We apply this sampling procedure independently across the 10 trials and aggregate results to report the overall mean and standard error of capability gain, distillation, and progress counts on the full test set. Additional training hyper-parameters and implementation details are provided in Appendix §H.

# 3.1.2 Analysis

Figure 1 decomposes net performance gains (Eq. 2) into capability gain and self-distillation. We make the following observations:

Self-distillation dominates Across the Qwen sizes evaluated, the majority of the progress comes from converting answers that were already reachable within $\leq 16$ untrained samples into the trained pass@1 at temperature 0. Among the models evaluated, Qwen 7B and Qwen 3B show the highest gain via self-distillation, whereas the larger models (Qwen 32B and Qwen 72B) show comparatively fewer gains from self-distillation. In contrast, every model learns to solve some problems it could not solve at initialization, with the 0.5B model gaining the most in relative terms. Nevertheless, capability gain remains a minority contributor at every scale, indicating that RLVR primarily re-allocates probability mass rather than discovering truly novel solutions at the studied $k$.

Figure 2: Impacts of guidance on correct rollouts. Left: Guidance vs. no-guidance pass@k performance of Qwen-2.5-Math-7B on 10K randomly sampled training examples from open-r1/OpenR1-Math-220k [24]. Including problem-specific guidance in the context increases unbiased pass@k. Middle: Guided rollouts solve more previously unsolvable questions (capability gain), with gains growing in $k$.
Right: Guidance also improves performance on the distillation subset in comparison to the unguided model.

Headroom dictates returns and shrinks with capability We first note that the unsolved set $|U|$ contracts sharply as model size grows: 0.5B begins with 3195 unsolved items, 3B with 2150, 7B with 1913, 32B with 1617, and 72B with just 1532. Because each model converts a similar fraction $(\approx 25\%)$ of its own $|U|$, the absolute pass@1 lift (Figure 1; right) inevitably drops at larger scale. Progress for stronger models therefore hinges on introducing harder examples that replenish $|U|$ and expose new reasoning gaps.

# 3.2 Guide-GRPO towards mathematical reasoning

Leveraging the observation that the majority of the performance gain in RLVR training comes from self-distillation, we seek to increase the proportion of correct trajectories during RL training while remaining close to the policy's sampling distribution. In this section, we first validate the hypothesis that prompt-specific guidance in the model's context improves pass@k (Figure 2), and then utilize this improvement to empirically demonstrate Guide-GRPO's (Algorithm 1) effectiveness at improving mathematical reasoning for language policy models (Table 1 and Table 2).

# 3.2.1 Experimental Setup

Our training data consists of the default subset of OpenR1-Math-220k [24], comprising 93.7K math reasoning tasks. For each entry, we extract the prompt, ground-truth answer, and the human-authored reference solution. For guidance generation, we prompt GPT-4o to produce pedagogically inspired hints that mimic expert tutoring strategies – providing high-level conceptual direction and problem-solving frameworks without revealing solution paths (full instructions and guidance examples are in Appendix §G). Our base model, Qwen-2.5-Math-7B [13], is a large language model pre-trained and fine-tuned for complex mathematical reasoning.
We establish a comprehensive comparative framework: (1) standard GRPO training, (2) GRPO with Filtering – a technique shown to improve training efficiency by discarding prompts for which the rollouts are all incorrect or all correct [25], (3) our proposed Guide-GRPO approach, and (4) a supervised fine-tuning (SFT) baseline trained directly on human-authored solutions. This multi-faceted comparison allows us to evaluate whether transforming expert solutions into guided hints yields performance advantages over both direct imitation of expert reasoning and optimized reinforcement learning approaches. Moreover, to assess the robustness of our method to increasing computational resources, we conduct experiments that increase context length (4K $\rightarrow$ 8K), followed by a joint increase in context length and model size (7B $\rightarrow$ 32B). Additional training hyper-parameters and implementation details are provided in Appendix §I.

# 3.2.2 Results

Table 1: Comparison of Pass@1 (greedy decoding) and Pass@16 (temperature 1.0) performance on several math benchmarks across different training algorithms. SFT is trained on the reference solution, Filter-GRPO uses the standard GRPO objective with filtering of all-incorrect and all-correct groups, GRPO is without filtering, Base is the base model (Qwen-2.5-Math-7B), and Guide-GRPO is our method. The performance for Pass@1 is averaged over 5 independent samples. Table 4 contains the full results with 95% confidence intervals. Bold values indicate best performance.

Task-specific guidance increases correct rollouts Figure 2 demonstrates that introducing targeted, in-context guidance significantly increases the number of correct rollouts. We then dissect how these hints affect both capability gain and self-distillation (see Section 3.1).
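The "unbiased pass@k" referenced for Figure 2 is conventionally computed with the combinatorial estimator popularized by the HumanEval paper; the text does not spell out its formula, so the following is an assumption based on common practice:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k: probability that at least one of k samples drawn
    without replacement from n generated samples (c of them correct) is
    correct, i.e. 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer than k incorrect samples: success guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)
```

Averaging this quantity over problems gives a pass@k curve that is unbiased for any n ≥ k, unlike naively splitting the n samples into disjoint groups of k.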
Our analysis reveals that guidance not only helps the model solve previously unreachable prompts (capability gain) but also reinforces consistency on already-solvable ones (self-distillation). Building on this insight, we apply Guide-GRPO (see Section 3.2) to transfer the performance improvements observed under guidance directly into the base policy. More details about this experimental setup are described in Appendix §I.

Guide-GRPO leads to better test-time performance As shown in Table 1, Guide-GRPO consistently outperforms all baselines across both pass@1 and pass@16 metrics on a wide range of math benchmarks. Notably, Guide-GRPO achieves a 3% absolute improvement in pass@1 on Olympiad-level questions and a 13% improvement in pass@16 on AIME 25, relative to the next best performing baseline. On aggregate, it achieves the highest macro-average (51.03 pass@1, 70.15 pass@16) and micro-average (70.66 pass@1, 83.29 pass@16) scores, highlighting robust gains across both balanced and volume-weighted evaluations. These results demonstrate that Guide-GRPO is effective at integrating prompt-specific guidance into the training process, enabling the resulting policy to generalize better to difficult mathematical reasoning tasks – even without access to guidance at test time. Additionally, its strong pass@k performance, combined with our observation that RLVR primarily drives progress through self-distillation, suggests that Guide-GRPO promotes better exploration and solution diversity, which are key to continued improvement in reasoning-centric domains.

Guide-GRPO improvements scale with context length and model size The results in Table 2 demonstrate Guide-GRPO's consistent improvements over vanilla GRPO when scaling to larger context lengths (4K $\rightarrow$ 8K) and model sizes (7B $\rightarrow$ 32B).
For the 32B model, Guide-GRPO achieves a 3.39-percentage-point improvement in macro-average Pass@1 (56.26% vs 52.87%) and a 1.89-percentage-point improvement in micro-average Pass@1 (76.36% vs 74.47%). More generally, the improvements are consistent across both Pass@1 and Pass@16 metrics, with Guide-GRPO showing gains ranging from 1 to 4 percentage points across all configurations. These results strengthen the empirical evidence that Guide-GRPO's test-time generalization scales effectively with increased computational resources along both the context-length and parameter-count dimensions.

Table 2: Comparison of Pass@1 (greedy decoding) and Pass@16 (temperature 1.0) performance on several math benchmarks with larger context length (8K) across model sizes (7B and 32B). The performance for Pass@1 is averaged over 5 independent samples. Table 5 contains the full results with 95% confidence intervals. Bold values indicate best performance.

Guide-GRPO demonstrates better train-time metrics Figure 3 reveals an interesting training trajectory for Guide-GRPO. While initially exhibiting lower rollout accuracy without guidance, Guide-GRPO ultimately surpasses standard GRPO methods as training progresses. This performance crossover indicates that our selective guidance injection approach effectively updates the policy weights, enabling the model to perform better independently, without requiring guidance at inference time. Notably, Guide-GRPO maintains consistently higher entropy throughout training while steadily increasing response length. This combination of enhanced entropy and improved performance, both during training and testing, suggests that Guide-GRPO preserves exploratory capacity for novel solutions while achieving superior results across diverse mathematical reasoning tasks.
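The macro- and micro-averages reported in Tables 1 and 2 differ only in weighting, which a small sketch makes concrete (the benchmark names and counts below are illustrative, not the paper's):

```python
def macro_micro(benchmarks):
    """benchmarks: {name: (num_correct, num_problems)}.
    Macro-average weighs every benchmark equally (the balanced view);
    micro-average weighs by problem volume (large sets dominate)."""
    per_bench = {name: c / n for name, (c, n) in benchmarks.items()}
    macro = sum(per_bench.values()) / len(per_bench)
    micro = (sum(c for c, _ in benchmarks.values())
             / sum(n for _, n in benchmarks.values()))
    return macro, micro

# A large easy set and a small hard set pull the two averages apart:
# macro = (0.90 + 0.10) / 2 = 0.50, while micro ~ 903 / 1030 ~ 0.88.
scores = {"GSM8K-like": (900, 1000), "AIME-like": (3, 30)}
```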
Training dynamics reveal critical convergence factors for Guide-GRPO Our investigation into various policy loss formulations uncovered specific configurations that lead to consistent training instability. Figure 4 in Appendix §D illustrates the reward trajectories across different settings, highlighting two critical factors affecting convergence:

• Importance weighting relative to the guided distribution – Constructing importance weights for guided trajectories relative to old-policy weights conditioned solely on the prompt introduces significant training instability. Since the sampled trajectories originate from the old policy conditioned on both prompt and guidance – rather than just the prompt – the resulting probability ratios between the current and old policy misrepresent the true gradient direction along the sampled trajectory, leading to suboptimal updates. Theoretical support is detailed in Appendix §B.

• PPO-clip mechanism destabilizes guided trajectories – When incorporating guided trajectories with importance weighting relative to the sampling distribution, we observe that PPO clipping causes training divergence at approximately 50 steps. This phenomenon aligns with theoretical expectations: guided trajectories inherently generate smaller probability ratios, causing the min-clip operation to artificially inflate most token probability ratios, thereby triggering unstable gradient updates. We mitigated this issue by removing ratio clipping, which empirically produced stable training outcomes.

Figure 3: Comparison of Guide-GRPO with baseline methods across training steps (400 total). Left: Rollout accuracy without guidance shows Guide-GRPO ultimately outperforming baselines despite lower initial performance. Middle: Generation entropy remains consistently higher for Guide-GRPO, indicating better solution diversity. Right: Response token length increases for Guide-GRPO in later training stages.
Shaded regions represent confidence intervals.

Threshold for guidance Our ablation across three guidance thresholds (All Incorrect, Mostly Incorrect, and Always) reveals optimal performance when guidance is applied only when all standard rollouts fail, as shown in Table 3. While "Mostly Incorrect" performs comparably, unconditional guidance significantly impairs results. Excessive guidance handicaps learning by preventing the model from developing robust reasoning. Conversely, strategic guidance only for entirely incorrect samples provides essential signal when the model's sampling distribution completely misses valid solutions, exposing the model to guided solution traces for problems beyond the current policy's capability while still incentivizing independent exploration in all other cases.

Table 3: Performance comparison of various guidance threshold strategies. The "All Incorrect" strategy applies guidance only when all original prompt rollouts fail; "Mostly Incorrect" applies guidance when accuracy falls below 25%; and "Always" unconditionally applies guidance to all rollouts. Bold values indicate best performance.

# 4 Related Work

Reinforcement Learning for LLM Reasoning Recent advances in reinforcement learning approaches [26, 27, 28, 29, 30] have driven remarkable progress in enhancing LLMs' reasoning capabilities. OpenAI-o1 [1] and DeepSeek-R1 [2] have achieved state-of-the-art results on complex reasoning tasks in math, coding, and beyond by pioneering the use of Reinforcement Learning from Verifiable Rewards (RLVR) [3, 31, 2, 1], in which the reward is computed using a rule-based verification function [32, 33, 34]. Previous works have shown that models trained with RLVR surpass those trained with earlier approaches (such as supervised fine-tuning (SFT) or reinforcement learning from human feedback (RLHF)) in terms of generalization capacity and performance [35, 36].
Some works also provide frameworks for distilling knowledge from pass@k into pass@1 via expert iteration [6, 7, 8, 9, 10, 11] to improve a language model's ability to solve challenging reasoning problems autonomously.

Learning Mechanisms for Reinforcement from Verifiable Rewards Building upon the increasing traction of RLVR in the reasoning space, some works have examined the fundamental dynamics of the improvements seen from RLVR [5, 37, 38], claiming that RLVR boosts sampling efficiency by biasing the model's output distribution toward paths that are more likely to yield rewards, but reduces the overall reasoning capacity boundary at very high $k$ ($k = 256$) [5]. We corroborate the results regarding sampling efficiency in our work as well, revealing that while capability gain exists in the lower $k$ range ($k \leq 16$) across multiple domains (math, coding, STEM, etc.) and model scales, learning to solve new problems via RLVR is dominated by self-distillation of pass@k performance into pass@1 performance. As a result, we also explore methods to induce capability gain and thus propose the Guide technique, which adaptively incorporates hints on failure to surpass the gains made by self-distillation alone.

Reinforcement Learning for LLMs with Off-Policy Data A variety of off-policy reinforcement learning techniques, such as DPO [39], and variants of on-policy algorithms, like Tapered Off-Policy REINFORCE [40], have recently been applied to LLMs. Off-policy methods yield the advantage of better sample efficiency by learning from experiences collected by different policies, but at the cost of potentially increased instability. One recent work [41] targets integrating high-quality off-policy trajectories with policy shaping via regularized importance sampling.
In contrast, our method leverages guidance in context, which we hypothesize can bridge the gap in benefits between on-policy and off-policy learning better than fully off-policy incorporation, and which can be applied to training settings in which no more powerful teacher model exists to distill from. We thus focus on studying how to improve model performance independently of directly distilling from a much stronger model such as R1.

# 5 Future Work

While Guide-GRPO demonstrates strong empirical and theoretical performance on mathematical reasoning, several directions remain open. First, future work should more deeply investigate the effect of the quality and nature of guidance on model progress during RL. Future methods may explore more adaptive or personalized guidance strategies that evolve based on policy progress or target specific reasoning failures. Extending Guide to other domains such as code generation, agents, or even robotics could test its generality. In these initial experiments, we have only evaluated Guide on models at the 7B-parameter scale and at context lengths up to 4K due to compute limitations. Scaling studies are needed to understand how the effectiveness of Guide varies with model size, context length, and compute scale.
We study the process through which reasoning models trained with reinforcement learning on verifiable rewards (RLVR) can learn to solve new problems. We find that RLVR drives performance through two main means: (1) by compressing pass@$k$ into pass@1 and (2) via "capability gain," in which models learn to solve new problems that they previously could not solve even at high $k$. We find that while capability gain exists across model scales, learning to solve new problems is primarily driven through self-distillation. We demonstrate these findings across model scales ranging from 0.5B to 72B on >500,000 reasoning problems with prompts and verifiable final answers across math, science, and code domains. We further show that we can significantly improve pass@$k$ rates by leveraging natural language guidance for the model to consider within context while still requiring the model to derive a solution chain from scratch. Based on these insights, we derive $\text{Guide}$, a new class of online training algorithms. $\text{Guide}$ adaptively incorporates hints into the model's context on problems for which all rollouts were initially incorrect and adjusts the importance sampling ratio for the "off-policy" trajectories in order to optimize the policy for contexts in which the hints are no longer present. We describe variants of $\text{Guide}$ for GRPO and PPO and empirically show that Guide-GRPO on 7B and 32B parameter models improves generalization over its vanilla counterpart with up to 4$\%$ macro-average improvement across math benchmarks. We include careful ablations to analyze $\text{Guide}$'s components and theoretically analyze Guide's learning efficiency.
[ "cs.LG", "cs.AI", "cs.CL" ]
# 1. Introduction

Large language models (LLMs) (Achiam et al., 2023; Abdin et al., 2024; Yang et al., 2024) have demonstrated impressive capabilities across diverse tasks such as mathematics (Zhang et al., 2024b;a; Yue et al., 2024), coding (Nam et al., 2024; Chew et al., 2023; Kim et al., 2024a), and reasoning (Hao et al., 2023; Yuan et al., 2024a; Zheng et al., 2023). Despite these achievements, as LLM-driven systems increasingly integrate into societal roles, serving as AI companions (Xu et al., 2024b; Chen et al., 2024; Ploderer et al., 2025) or AI employees (Brachman et al., 2025; McDuff et al., 2025; Singhal et al., 2025; Dillion et al., 2025), they must possess robust learning capabilities. Effective learning is essential for AI agents to adapt dynamically to diverse and changing environments, continuously acquire new knowledge, and autonomously respond to novel contexts. However, research on the learning ability of LLMs remains scarce, with little work systematically investigating how well LLMs can acquire and generalize new knowledge across tasks. To address this gap, we aim to systematically investigate the learning ability of LLMs. Drawing insights from cognitive science and educational theory (Fodor, 1983; Kolb, 1984; Clark et al., 2012), we first analogize from human learning processes to identify three fundamental dimensions of learning: (1) learning from instructor, where the model acquires knowledge through guided interaction (Cole et al., 2012; Sheffield et al., 2018); (2) learning from concept, where the model internalizes structured abstractions and generalizes them to downstream tasks (Minsky, 1961; Beckmann et al., 2023); and (3) learning from experience, where the model adapts based on accumulated trajectories or exploration feedback (Kolb, 1984; Zhao et al., 2024). For each dimension, we design targeted experimental paradigms to operationalize the corresponding learning mechanism.
In Learning from Instructor, we simulate tutor-learner settings with and without interactive clarification, demonstrating that interaction between instructor and learner consistently improves the model's learning ability. In Learning from Concept, we evaluate the impact of injecting abstract conceptual knowledge in competitive environments (i.e., TextArena (Guertler et al., 2025)), showing that (i) conceptual understanding is scale-emergent, and (ii) injecting structured domain knowledge can provide a tangible advantage if the model is sufficiently capable of internalizing it. Learning from Experience is a crucial capability for adapting to novel environments and acquiring new knowledge autonomously; we find that while LLMs are effective few-shot learners, they struggle in many-shot settings due to the challenges of long-context integration. This highlights the importance of a unified benchmark that can evaluate LLMs' general learning abilities across cognitive dimensions. Building on this framework and empirical insights, we consolidate our findings into a unified benchmark, LearnArena, that reflects realistic and cognitively grounded learning scenarios. It enables principled evaluation of LLMs' learning behavior across three learning aspects, and provides a foundation for advancing learning capabilities. Empirical results reveal that learning ability benefits from increased capacity but faces a bottleneck; architectural and training advancements play a crucial role in further enhancing learning capability. Our contributions are as follows: (1) We present the first work to explicitly evaluate and analyze the general learning ability of LLMs. Grounded in cognitive science, we propose a principled decomposition into three dimensions: learning from instructor, learning from concept, and learning from experience, each with dedicated methodologies.
(2) We conduct a comprehensive empirical study across the three learning dimensions, revealing three key insights: interaction improves learning in instructor-based settings; conceptual understanding is scale-emergent and benefits larger models; and LLMs are effective few-shot learners but not many-shot learners. (3) Based on our framework and findings, we introduce a benchmark, LearnArena, that offers a unified and realistic evaluation of LLMs' general learning ability across three cognitive dimensions. It enables diagnostic insights and supports the development of more adaptive and human-like models.

# 2. Related Work

Evaluation of Large Language Models. LLMs have been extensively evaluated on tasks involving linguistic competence (Srivastava et al., 2022; Chen et al., 2023; Kim et al., 2024b), factual recall (Geva et al., 2023; Wang et al., 2023), reasoning (Hao et al., 2023; Yuan et al., 2024a; Zheng et al., 2023), instruction following (Hu et al., 2024c;d;b), and multitask generalization (Muennighoff et al., 2023; Cohen et al., 2025; Yin et al., 2024), with benchmarks such as MMLU (Hendrycks et al., 2020b), BIG-Bench (Aarohi Srivastava, 2022), and HELM (Liang et al., 2023) highlighting broad capabilities and emergent scaling trends. However, most evaluations focus on static zero- or few-shot performance, offering limited insight into how models learn or generalize from experience. To address this, recent studies have begun shifting toward more dynamic formulations that probe the learning behavior of LLMs. Dong et al. (2022) categorize in-context learning (ICL) as inference-time adaptation and identify key factors for success. Min et al. (2022) show that ICL performance depends more on input format than label correctness. Agarwal et al. (2024) find that many-shot ICL yields diminishing returns as context length grows. Schick et al. (2023) introduce Toolformer, enabling models to self-supervise API use.
Yuan et al. (2024b) develop models that improve via internally generated feedback. Nevertheless, most such studies are piecemeal, lacking a unified theoretical lens or systematic analysis for comparing learning processes across tasks. Our work addresses this gap by introducing a cognitively grounded decomposition to systematically analyze learning ability.

Learning Ability of Large Language Models. Recent advances in large language models have demonstrated remarkable capabilities in adapting and generalizing knowledge beyond traditional static evaluations (Brown et al., 2020; Wei et al., 2022; Tan et al., 2024). Cognitive psychology and educational theory have long emphasized that effective learning involves not only direct instruction and guided feedback (Clark et al., 2012; Cole et al., 2012), but also the ability to abstract conceptual structures and adapt from experience (Kolb, 1984; Minsky, 1961; Fodor, 1983). Motivated by these insights, we adopt a tripartite perspective, learning from instructor, concept, and experience, that mirrors rapid-instruction-to-implementation learning (RITL) in humans (Sheffield et al., 2018), classical rule-based accounts of cognition (Minsky, 1961; Fodor, 1983), and experiential learning theory (Kolb, 1984). While contemporary LLM benchmarks extensively measure task-specific performance (Aarohi Srivastava, 2022; Liang et al., 2023; Hendrycks et al., 2020b; Meva & Kukadiya, 2025), they rarely probe how models internalize instructions, extract explicit rules, or accumulate and reuse episodic experience, capacities increasingly highlighted by self-evolving or self-refinement agents (Gao et al., 2024; Hu et al., 2024a). This gap underscores the need for a cognitively informed evaluation framework that systematically characterizes the general learning abilities of LLMs across these complementary dimensions.

# 3.
A Cognitive Framework for Analyzing Learning Abilities in LLMs

To systematically investigate the general learning capabilities of LLMs, we propose a cognitively grounded framework comprising three paradigms: Learning from Instructor (LfI), Learning from Concept (LfC), and Learning from Experience (LfE). LfI captures learning via explicit guidance, supported by evidence that instruction accelerates task acquisition and reduces cognitive load (Cole et al., 2012; Sheffield et al., 2018; Clark et al., 2012; Siregar, 2021). LfC reflects structured abstraction from static concepts, grounded in symbolic cognitive theories (Minsky, 1961; Fodor, 1983; Hu et al., 2024a) despite critiques of their rigidity (Beckmann et al., 2023). LfE models adaptation through feedback and interaction, consistent with experiential learning theory (Kolb, 1984; Huerta-Wong & Schoech, 2010) and recent work showing LLMs improve via self-refinement (Zhao et al., 2024; Gao et al., 2024).

Figure 1. Overview of our proposed cognitive framework for evaluating general learning abilities in LLMs. We decompose learning into three core types: (a) Learning from Instructor; (b) Learning from Concept; and (c) Learning from Experience.

Formally, we define a learning task $t$ as a tuple $(\mathcal{X}_t, \mathcal{Y}_t, \mathcal{C}_t)$, where $\mathcal{X}_t$ denotes the input space, $\mathcal{Y}_t$ the output space, and $\mathcal{C}_t$ the context space, which characterizes the type of information the model must incorporate to generate correct outputs. The learning objective is to establish a predictive mapping: $$ f_\theta : \mathcal{X}_t \times \mathcal{C}_t \to \mathcal{Y}_t $$ where the form of $\mathcal{C}_t$ is determined by the learning paradigm.
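The task tuple $(\mathcal{X}_t, \mathcal{Y}_t, \mathcal{C}_t)$ and the mapping $f_\theta$ can be rendered as a small typed sketch, with one context type per paradigm. The class and function names here are ours, not the paper's, and serializing the context as a prompt prefix is one simple realization among many.

```python
from dataclasses import dataclass
from typing import Callable, List, Union

@dataclass
class InstructorSignals:   # C_t = T_t: interactive directives or feedback (LfI)
    directives: List[str]

@dataclass
class ConceptKnowledge:    # C_t = K_t: static definitions or rule schemata (LfC)
    concepts: List[str]

@dataclass
class ExperienceTrace:     # C_t = tau_t = {s_1, ..., s_k}: past interactions (LfE)
    snapshots: List[str]

Context = Union[InstructorSignals, ConceptKnowledge, ExperienceTrace]

def predict(model: Callable[[str], str], x: str, context: Context) -> str:
    """f_theta : X_t x C_t -> Y_t, realized by serializing the context
    into the prompt before querying the model."""
    if isinstance(context, InstructorSignals):
        parts = context.directives
    elif isinstance(context, ConceptKnowledge):
        parts = context.concepts
    else:
        parts = context.snapshots
    return model("\n".join(parts) + "\n" + x)
```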
In Learning from Instructor (LfI), the model acquires task knowledge through explicit interactions with an external instructor, which may involve demonstrations, explanations, or corrections. These interactions form a communication-based supervision channel, where the instructor incrementally shapes the model’s understanding of the task. Formally, we define the context as $\mathcal{C}_t = \mathcal{T}_t$, where $\mathcal{T}_t$ encodes structured instructional signals such as natural language directives, step-by-step exemplars, or annotated feedback. The model learns to integrate these interactive signals into its prediction process: $f_{\theta}(x, \mathcal{T}_t) \to \hat{y}$. In Learning from Concept (LfC), the model receives static, abstract conceptual knowledge that captures general properties or domain principles. Unlike instructor signals that evolve through interaction, concepts are predefined and non-interactive, such as definitions, category structures, or rule schemata. We define the context as $\mathcal{C}_t = \mathcal{K}_t$, where $\mathcal{K}_t$ represents a set of symbolic or linguistic descriptions encoding domain-relevant concepts. The model is expected to internalize these abstractions and apply them to generate predictions consistent with conceptual constraints: $f_{\theta}(x, \mathcal{K}_t) \to \hat{y}$, subject to $\hat{y}$ being semantically aligned with $\mathcal{K}_t$. In Learning from Experience (LfE), the model accumulates and utilizes its own prior interaction history to adapt future behavior. Unlike LfC, where supervision is static and predefined, LfI and LfE both involve dynamic supervision.
In LfI, the model receives guidance through interactive instructions, while in LfE, supervision emerges from sequences of interactional feedback, potentially including successful or failed trajectories, user preferences, or latent task rewards. The context is defined as $\mathcal{C}_t = \tau_t = \{s_1, s_2, \dots, s_k\}$, where each $s_i$ represents a structured snapshot from a past interaction, such as input-output pairs, action-state transitions, or dialog turns. The model is expected to generalize from this accumulated experience to improve decision-making: $f_{\theta}(x, \tau_t) \to \hat{y}$. These three paradigms offer a unified and cognitively motivated framework for evaluating how LLMs acquire, organize, and apply knowledge. LfI emphasizes guided learning via instructor interaction; LfC emphasizes structural abstraction from fixed knowledge; and LfE emphasizes adaptation through situated experience. This decomposition allows us to probe distinct facets of model generalization and align LLM evaluation with core dimensions of human learning. # 4. Learning from Instructor (LfI) We investigate Learning from Instructor (LfI), where models acquire task knowledge via structured guidance, such as demonstrations, explanations, or feedback. We evaluate this setting along two dimensions: (1) Passive Consumption vs. Interactive Clarification, and (2) Scaling Learner. # 4.1. Passive Consumption vs. Interactive Clarification Experiment Setup. To examine the impact of interactivity in instruction-based learning, we adopt the MagpieMath (Xu et al., 2024a) dataset, a high-quality math dataset generated autoregressively by Qwen2.5-Math-72B.
An instructor model is first trained on this dataset and subsequently used to teach a separate learner model under two learning paradigms: (1) Passive Consumption, in which the learner receives only direct solutions from the instructor without any further interaction, and (2) Interactive Clarification, where the learner is allowed to ask clarification questions following each instructor response, and the instructor provides targeted follow-up explanations. In this setting, the learner is restricted to a single clarification question per sample to ensure consistency across training examples. Both the initial answers and the clarification responses are aggregated as supervision for the learner. We experiment with four representative LLM families: LLaMA3.1-8B, Qwen2.5-7B, Mistral-7B, and Phi-3-mini—each used as both instructor and learner to test robustness across pairings. Learners are trained on outputs from matched instructors within each setting. Evaluation is conducted on eight diverse mathematical benchmarks: GSM8K (Cobbe et al., 2021), SVAMP (Patel et al., 2021), MATH (Hendrycks et al., 2021b), NumGLUE (Mishra et al., 2022), SimulEq (Koncel-Kedziorski et al., 2016), AQuA (Ling et al., 2017), SAT (Zhong et al., 2023), and MMLU-Math (Hendrycks et al., 2020a), covering competencies from basic arithmetic to multi-step symbolic reasoning and standardized test preparation. Implementation and generation details are provided in Appendix A.3. Figure 2. Comparison of learner performance under Passive Consumption and Interactive Clarification paradigms across eight mathematical benchmarks. Main Result. As shown in Figure 2, learners trained under the Interactive Clarification paradigm consistently outperform those trained via Passive Consumption across all eight evaluation benchmarks. This performance gain highlights that LLMs are capable of leveraging interactive feedback to improve task understanding, resembling human-like active learning behaviors.
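A minimal sketch of how the two supervision regimes could assemble training data, assuming a callable instructor and a learner that asks a single clarification question; all names and fields here are hypothetical, not the paper's code.

```python
from typing import Callable, Dict, List

def build_supervision(problem: str,
                      instructor: Callable[[str], str],
                      learner_question: Callable[[str, str], str],
                      interactive: bool) -> List[Dict[str, str]]:
    """Collect instructor outputs used to train the learner.

    Passive Consumption: only the instructor's direct solution.
    Interactive Clarification: additionally, one clarification Q/A per
    sample (the single-question restriction described in the setup).
    """
    solution = instructor(problem)
    examples = [{"prompt": problem, "completion": solution}]
    if interactive:
        question = learner_question(problem, solution)      # learner asks
        followup = instructor(f"{problem}\nQ: {question}")  # instructor clarifies
        examples.append({"prompt": f"{problem}\nQ: {question}",
                         "completion": followup})
    return examples
```

Both the initial answer and the clarification response end up in the training pool, matching the aggregation described above.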
Notably, the gains vary across model families. Mid-sized models such as Mistral-7B and Qwen2.5-7B show substantial improvements, suggesting a strong capacity to benefit from additional instructional signals. In contrast, the smallest model, Phi-3-mini, shows only marginal improvement, indicating limited ability to engage in or benefit from interactive clarification. This suggests that active learning capabilities are not uniform across models and may depend on model capacity or architecture. # 4.2. Scaling Learner Experiment Setup. While Section 4.1 explores the effect of instructional interactivity across different model families, here we focus on how scaling the learner model within the same family influences learning outcomes. Specifically, we fix the instructor as Qwen2.5-72B and vary the learner model across Qwen2.5-1.5B, Qwen2.5-7B, and Qwen2.5-32B. We evaluate the impact of instructional interactivity by comparing the Passive Consumption and Interactive Clarification paradigms. Training data and evaluation benchmarks are kept consistent with Section 4.1. Results are shown in Figure 3, where Origin refers to the performance of the untrained model, and Norm Acc denotes the accuracy after training under each setting divided by the corresponding untrained baseline. Further implementation details are provided in Appendix A.3. Main Result. As shown in Figure 3, we observe a strong positive correlation between learner model scale and normalized learning gains: larger models consistently achieve higher performance improvements across all evaluation tasks. This trend holds for both Passive Consumption and Interactive Clarification, with the benefits of interactivity becoming more pronounced as model capacity increases. Qwen2.5-32B demonstrates substantial gains under interactive supervision across all eight math-related benchmarks. For instance, in NumGLUE and SVAMP, it achieves $+25.7\%$ and $+23.8\%$ normalized improvement, compared to $+18.2\%$ and $+16.8\%$ in the passive setting, showing clear added value from clarification feedback. Similar trends appear in SimulEq and MATH, where interactive training leads to $+22.4\%$ and $+11.0\%$ improvements, respectively. In contrast, Qwen2.5-1.5B shows smaller gains overall, and the gap between passive and interactive learning is narrower. For example, in GSM8K and SAT, improvements under interactivity are only $+5.0\%$ and $+3.5\%$, compared to $+3.8\%$ and $+2.8\%$ in the passive case. This indicates that limited capacity restricts the ability to absorb and act on richer supervision signals. # 5. Learning from Concept (LfC) We study Learning from Concept (LfC), where models leverage static, abstract knowledge, such as rules, definitions, or structured representations, to guide behavior or reasoning. Figure 3. Normalized accuracy (Norm Acc) of Qwen2.5 learners (1.5B, 7B, 32B parameters) on the eight mathematical benchmarks under the Origin, Passive, and Interactive settings. We evaluate this ability in two settings: (1) Structured Knowledge Injection in Competitive Environments, testing whether models can integrate conceptual hints to improve decision-making in competitive environments, and (2) Conceptual Generalization in Logic and Planning Tasks, examining generalization from symbolic structures in logic and planning tasks. Table 1. Accuracy on six tasks with and without concept input (baseline / +concept). Higher scores indicate better sub-goal completion or reasoning accuracy. Darker cell colors denote larger improvements; white means no change. # 5.1. Structured Knowledge Injection in Competitive Environments Experiment Setup.
We evaluate whether conceptual knowledge improves strategic performance in multi-agent settings using TextArena (Guertler et al., 2025), a suite of competitive environments with symbolic rules and multi-turn dynamics. Each game involves two players, Player-0 and Player-1. We fix Player-0 as Qwen2.5-32B and vary Player-1 across four scales: Qwen2.5-1.5B, 7B, 14B, and 32B. Prior to gameplay, Player-1 either receives no guidance (without Concept) or is given natural language descriptions of game rules and strategies (with Concept), generated by Qwen2.5-32B. For each environment and model pair, we run 20 matches and report Player-1’s win rate averaged over these games. Environments include Checkers (CH), Poker (PK), Stratego (ST), Tic Tac Toe (TT), Truth and Deception (TD), and Ultimate Tic Tac Toe (UTT). Performance is measured as Player-1’s win rate against the fixed opponent. Results are shown in Figure 4. Implementation details are provided in Appendix A.4. Main Result. We observe two key trends in Figure 4. First, model scale significantly influences the effectiveness of conceptual guidance. For smaller models such as Qwen2.5-1.5B, injecting conceptual descriptions consistently degrades performance across all environments, with the with-Concept condition underperforming the without-Concept condition, suggesting that low-capacity models struggle to integrate abstract knowledge and may treat it as distractive noise. In contrast, as model size increases, the gap progressively narrows and eventually reverses. By Qwen2.5-14B and 32B, models consistently benefit from conceptual input across most tasks, indicating improved abstraction and utilization capabilities with scale. Second, win rates across tasks also increase monotonically with model size, regardless of condition, highlighting a strong correlation between scale and general strategic competence.
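The match protocol above reduces to a simple aggregate; a sketch assuming a hypothetical play_match callable that returns the winner's player id.

```python
from typing import Callable

def win_rate(play_match: Callable[[int], int], n_matches: int = 20) -> float:
    """Fraction of matches won by Player-1 over n_matches games.

    play_match(i) is assumed to return the winning player id (0 or 1)
    for the i-th game; the benchmark fixes Player-0 and varies Player-1,
    playing each environment/condition pair 20 times.
    """
    wins = sum(1 for i in range(n_matches) if play_match(i) == 1)
    return wins / n_matches
```

Running this once per environment under each condition (with and without Concept) yields the per-cell numbers reported in Figure 4.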
Notably, high-conceptual-load environments such as Stratego and Truth and Deception show the largest gains from conceptual input, especially for larger models. These results suggest that (i) conceptual understanding is scale-emergent, and (ii) injecting structured domain knowledge can provide a tangible advantage, if the model is sufficiently capable of internalizing it. # 5.2. Conceptual Generalization in Logic and Planning Tasks Experiment Setup. To evaluate rule-based generalization, we include six tasks centered on explicit symbolic or logical structures. LogicGrid and NaLogic feature structured and narrative logic puzzles (Mandell, 1986; Dudeney, 1907), testing deduction under constraint. Plan (Vallati et al., 2015) translates classical symbolic planning domains (e.g., Gripper, Barman) into natural language settings to assess structured action reasoning. Additionally, in tasks from AlfWorld (Shridhar et al., 2021), ScienceWorld (Wang et al., 2022a), and BabyAI (Chevalier-Boisvert et al., 2019), models must complete goal-oriented tasks requiring commonsense reasoning, procedural knowledge, and spatial navigation. For the concept-injected setting, we use Qwen2.5-72B to generate static conceptual hints, which are provided as auxiliary input to smaller models during evaluation. Results are presented in Table 1 and Figure 4. We use subgoal-based annotations to enable fine-grained progress tracking. Implementation details are provided in Appendix A.4. Figure 4. Win rates of Player-1 across six competitive environments from TextArena under the LfC setting. Main Result. As shown in Table 1, providing external conceptual information, generated by Qwen2.5-72B, consistently improves performance across all model sizes and tasks. The improvements are most pronounced in smaller models, indicating that such auxiliary knowledge can partially offset limited inherent reasoning capacity.
In symbolic reasoning tasks such as LogicGrid and NaLogic, concept injection yields consistent gains. For example, LogicGrid accuracy increases from 0.36 to 0.42 for Qwen2.5-7B, and from 0.54 to 0.57 for Qwen2.5-72B, suggesting that explicit structural cues remain beneficial even for larger models, though with reduced marginal benefit. In planning tasks like Plan, the gains are smaller but stable, e.g., from 0.14 to 0.17 in Qwen2.5-7B and 0.32 to 0.34 in Qwen2.5-14B, indicating that models can utilize conceptual breakdowns of action dynamics, particularly when capacity is constrained. For interactive environments such as AlfWorld, ScienceWorld, and BabyAI, we observe moderate but limited improvements. While concept injection provides useful structural cues, these tasks involve complex dynamics, such as exploration, multi-step control, and environment grounding, that are not fully addressed by static conceptual input. Notably, performance gains plateau for larger models, indicating that further progress may require richer supervision signals, interactive fine-tuning, or stronger state-tracking mechanisms beyond static abstractions. # 6. Learning from Experience (LfE) We investigate Learning from Experience (LfE), where models adapt by accumulating and utilizing prior interaction history. We evaluate this ability in two settings: (1) Experience-Driven Adaptation in Competitive Games, where agents condition on past multi-round play to adapt strategic behavior; and (2) In-context Examples as Off-policy Exploration Supervisions, where LLMs learn from ICL-style demonstrations viewed as episodic traces. # 6.1. Experience-Driven Adaptation in Competitive Games Experiment Setup. We adopt the same experimental setup as in Section 5.1, using TextArena (Guertler et al., 2025), a suite of competitive environments featuring symbolic rules and multi-turn dynamics. The results are shown in Figure 5.
Here, "with experience" refers to a setting where, during the $k$-th game, the player selects three prior game experiences from rounds 0 to $k-1$ (or all available ones if fewer than three exist), which are then used as ICL examples, allowing the model to learn from past experiences. Implementation details are provided in Appendix A.5. Figure 5. Win rates of Player-1 across six competitive environments from TextArena under the LfE setting. Main Result. Figure 5 presents win rates of Player-1 across six competitive environments in the TextArena benchmark, under two supervision regimes: without experience and with experience. Our analysis yields the following key findings: First, we observe a clear scaling trend in the efficacy of experience-based learning. For smaller models such as Qwen2.5-1.5B, the inclusion of past game trajectories generally leads to performance degradation across all environments. For example, in Checkers and Stratego, win rates drop from 0.30 to 0.11 and from 0.45 to 0.20, respectively. This suggests that low-capacity models struggle to extract relevant patterns from prior games and may treat such input as noise or distractors, leading to suboptimal decisions. Second, as model size increases, the ability to benefit from experience becomes more pronounced. Qwen2.5-7B exhibits mixed but improving trends, while Qwen2.5-14B and Qwen2.5-32B demonstrate consistent performance gains across most games when experience is incorporated. Notably, Qwen2.5-32B shows substantial improvements in complex environments like Stratego (0.6 to 0.8) and TruthAndDeception (0.6 to 0.9), indicating that larger models can better internalize structured histories and use them to inform strategy. Third, independent of experience usage, we observe a monotonic increase in win rates with model scale. This finding is consistent with results from learning from concept in Section 5.1, and confirms that larger models possess stronger general gameplay competence.
Lastly, compared to concept-based generalization (Section 5.2), experience-based learning appears to be more cognitively demanding. Unlike abstract rules, game histories are long, structurally rich, and often noisy, posing challenges for information extraction and reuse. Even for Qwen2.5-32B, environments like UltimateTicTacToe show only modest gains, highlighting the inherent difficulty of experience-based generalization. These results demonstrate that (i) learning from experience is a scale-emergent ability, and (ii) its effectiveness is tightly coupled with a model’s capacity to integrate, abstract, and act upon past interactions. # 6.2. In-context Examples as Off-policy Exploration Supervisions Experiment Setup. To investigate how accumulated experience influences model behavior, we treat in-context examples as trajectories encoding implicit supervision. This setup parallels off-policy learning: unlike competitive gameplay in Section 6.1, where models adapt through active interaction (on-policy), here they condition passively on externally provided experience examples. In Figure 6a, we evaluate four base models, LLaMA3.1-8B, Qwen2.5-7B, Mistral-7B, and Phi-3-mini, under varying numbers of in-context exemplars. In Figure 6b, we scale the number of examples up to 1300 and track the performance of Qwen2.5-14B-Instruct-1M as well as Qwen2.5-7B-Instruct-1M variants. "Norm acc" in Figure 6b denotes accuracy normalized by the model’s zero-shot performance. Finally, in Figure 6c, we compare two supervision regimes: in-context learning and instruction tuning. We use MagpieMath (Xu et al., 2024a), as introduced in Section 4, as the source of experience (i.e., the ICL examples), and the test set is also drawn from it. All ICL-example experiments were run 20 times, and we report the mean and standard deviation. Implementation details are provided in Appendix A.5. Main Result.
As shown in Figure 6a, all models exhibit a non-monotonic trend as the number of in-context examples increases: performance improves initially, peaks, and then declines. This highlights that current LLMs are effective few-shot learners but struggle to scale with longer trajectories, raising doubts about their many-shot capabilities. Figure 6b further confirms this limitation: when the number of in-context examples exceeds 900, performance drops sharply, even in larger models. This suggests that excessive examples may exceed attention capacity or interfere with generalization. Additionally, the 14B model consistently outperforms the 7B model, suggesting greater robustness to many-shot degradation. Figure 6c compares few-shot learning and instruction tuning. When data is limited, few-shot learning is highly efficient: using just 3 examples matches the performance of tuning with 5k instances. However, as data increases, instruction tuning continues to improve, while few-shot performance declines. This suggests a practical strategy: use few-shot learning in low-data regimes, and switch to tuning as more data becomes available. Figure 6. Experiments on learning from experience under off-policy exploration supervision. (c) Comparing in-context learning with instruction tuning under different data scales. # 7. LearnArena: A Benchmark Suite for General Learning Ability Experiment Setup. We construct a benchmark suite that evaluates LLMs’ general learning ability across three learning dimensions. The benchmark is built upon a modified version of the TextArena framework (Guertler et al., 2025), where each environment is cast as a two-player game between Player-0 and Player-1. We fix Player-0 as Qwen2.5-32B and designate the evaluated LLM as Player-1. At each round $k$, Player-1 selects three experiences from the previous $k-1$ games and submits them to Player-0 for feedback. Player-0 provides suggestions for improvement,
which Player-1 incorporates in the current round, representing Learning from Instructor. Separately, a concise summary of the game rules is generated by Qwen2.5-32B and given to Player-1, capturing Learning from Concept. In addition, Player-1 is encouraged to perform its own analysis of the selected past experiences and apply its conclusions in the current game, reflecting Learning from Experience. We evaluate each model-environment pair across 20 independent matches and report the average win rate of Player-1. The environments span a diverse set of strategic and social reasoning tasks, including Checkers (CH), Stratego (ST), Tic Tac Toe (TT), Truth and Deception (TD), SpellingBee (SB), SpiteAndMalice (SM), Tak (TK), and WordChains (WC). We evaluate the following models: Llama-3.1-8B, Mistral-7B-v0.3, Mistral-8B-2410, Qwen2.5-7B, Qwen2.5-14B-Instruct, Qwen2.5-32B-Instruct, Qwen3-8B, Qwen3-14B, GPT-4o, and GPT-4o-mini. Further implementation details are provided in Appendix A.6. Main Result. As shown in Table 2, the performance of LLMs varies widely across tasks, model families, and model scales, revealing several important trends. First, GPT-4o sets a new state-of-the-art, achieving an average win rate of 0.70 across the eight tasks, significantly outperforming all other models. Notably, GPT-4o demonstrates strong and consistent results across both symbolic reasoning tasks (e.g., 0.80 on Checkers and 0.81 on Tic Tac Toe) and social reasoning environments (e.g., 1.0 on Truth and Deception), showcasing its robust integration of instructional guidance, conceptual abstraction, and experiential adaptation. Its smaller variant, GPT-4o-mini, also performs competitively (0.51), matching Qwen2.5-32B and surpassing several larger models. Second, within the Qwen2.5 series, we observe scale-driven improvement: performance increases steadily from 7B (0.34) to 14B (0.47) and 32B (0.51), confirming that learning ability benefits from increased capacity, but only up to a point.
The gains diminish with scale. Third, the newer Qwen3 models exhibit a significant leap in performance: Qwen3-8B reaches 0.49, while Qwen3-14B achieves 0.60, outperforming all other open-source models and even exceeding GPT-4o-mini. This indicates that beyond scale, architectural and training advancements play a crucial role in improving learning ability. Table 2. Win rates of Player-1 (evaluated model) across eight environments in the LearnArena benchmark. Bold indicates the best result for each task. Figure 7. Spearman rank correlation matrix between our benchmark and nine existing benchmarks. Comparison with Existing Benchmarks. To assess the uniqueness and relevance of our benchmark, we compare it with a broad set of widely used evaluation benchmarks by computing Spearman rank correlations between system-level rankings. These benchmarks include instruction-following datasets such as AlpacaEval (Dubois et al., 2024), Vicuna (Chiang et al., 2023), Self-Instruct (Wang et al., 2022b), and WizardLM (Xu et al., 2023), as well as standard task-specific evaluations including ARC (Clark et al., 2018) (science reasoning), HellaSwag (Zellers et al., 2019) and Winogrande (Sakaguchi et al., 2019) (commonsense inference), GSM8K (Cobbe et al., 2021) (mathematical reasoning), and MMLU (Hendrycks et al., 2021a) (broad-domain knowledge). As shown in Figure 7, our benchmark shows low correlation with all existing benchmarks, with the highest only reaching a moderate level (0.66), indicating it captures distinct aspects of model performance. Among all benchmarks, our strongest correlations are with AlpacaEval ($\rho = 0.66$) and MMLU ($\rho = 0.66$), followed by HellaSwag ($\rho = 0.53$), WizardLM ($\rho = 0.44$), and Self-Instruct ($\rho = 0.43$). The correlation with Vicuna is relatively low ($\rho = 0.25$), and correlations with ARC, Winogrande, and GSM8K are near zero. These results indicate that while there is some overlap between our benchmark and existing instruction-following or knowledge-based tasks, our evaluation framework provides a different and complementary signal. This divergence is expected, given the design objective of our benchmark. Unlike most prior benchmarks that focus on static task-solving or narrow instruction-following performance, our benchmark is explicitly constructed to evaluate general learning ability. It is grounded in a cognitive framework that integrates three key dimensions: learning from instructor, learning from concept, and learning from experience. However, our benchmark does not treat these dimensions as isolated tasks or modules. Instead, we embed them holistically across the benchmark’s task suite, such that model success requires simultaneously demonstrating the ability to absorb guidance, abstract rules, and adapt to dynamic feedback. As a result, our benchmark presents a more integrated and realistic test of adaptive intelligence, resembling how learning occurs in natural human settings. The varying degrees of correlation across external benchmarks further illuminate how different datasets capture different cognitive demands. Benchmarks like ARC focus on domain-specific problem-solving (e.g., science) with fixed question-answer formats. These tasks require correctness but do not test a model’s ability to update its behavior through interaction or abstraction, which likely explains the near-zero correlation with our benchmark. On the other hand, AlpacaEval and WizardLM emphasize open-ended instruction-following, which partially aligns with our learning-from-instructor dimension. MMLU, though knowledge-centric, requires broad generalization across domains and may tap into conceptual understanding and knowledge transfer, hence the higher correlation.
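Spearman's rank correlation used for the system-level comparison is simply the Pearson correlation of the two rank vectors; a self-contained sketch with average ranks for ties:

```python
def _ranks(xs):
    """Average ranks (1-based), assigning tied values their mid-rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mid-rank of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    ra, rb = _ranks(a), _ranks(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra) ** 0.5
    vb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (va * vb)
```

Because it operates on ranks, the statistic is invariant to any monotone rescaling of the underlying scores, which is what makes it suitable for comparing rankings produced by benchmarks with incommensurable metrics.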
These findings confirm that our benchmark captures a distinct and underexplored aspect of LLM evaluation by assessing general-purpose learning through the integrated use of instruction, abstraction, and adaptation.
Large language models (LLMs) have shown impressive capabilities across tasks such as mathematics, coding, and reasoning, yet their learning ability, which is crucial for adapting to dynamic environments and acquiring new knowledge, remains underexplored. In this work, we address this gap by introducing a framework inspired by cognitive psychology and education. Specifically, we decompose general learning ability into three distinct, complementary dimensions: Learning from Instructor (acquiring knowledge via explicit guidance), Learning from Concept (internalizing abstract structures and generalizing to new contexts), and Learning from Experience (adapting through accumulated exploration and feedback). We conduct a comprehensive empirical study across the three learning dimensions and identify several insightful findings, such as (i) interaction improves learning; (ii) conceptual understanding is scale-emergent and benefits larger models; and (iii) LLMs are effective few-shot learners but not many-shot learners. Based on our framework and empirical findings, we introduce a benchmark that provides a unified and realistic evaluation of LLMs' general learning abilities across the three cognitive dimensions of learning. It enables diagnostic insights and supports evaluation and development of more adaptive and human-like models.
[ "cs.CL", "cs.AI" ]
# 1 Introduction DevOps practitioners rely on configuration management tools like Ansible for IT automation tasks. In order to complete these tasks, practitioners use scripts, which are referred to as automation scripts (Parnin et al., 2017). While these scripts save time and manage thousands of servers (ansible, 2022), practitioners still struggle to develop them correctly for intended IT automation tasks (Begoug et al., 2023). These challenges stem from (i) domain-specific languages with distinct syntax and semantics (Rahman et al., 2020), (ii) diverse IT automation tasks across OSes and cloud platforms (Begoug et al., 2023), and (iii) state reconciliation, which demands accurate infrastructure assessment and regulation (Hassan et al., 2024). Unsurprisingly, these challenges have sparked practitioner frustration and concern (Tanzil et al., 2023; NFsaavedra, 2024). Given the success of large language models (LLMs) in code generation (Li et al., 2024a; Chen et al., 2021), we hypothesize that they are well-suited for automating IT tasks through automation scripts. For automation scripts, it is not enough to generate syntactically correct code; models must also accurately interpret task requirements and produce scripts that achieve the desired system state (Drosos et al., 2024) to ensure successful execution. Even minor errors, such as using an incorrect module or misplacing a variable, can lead to catastrophic security risks or system failures. While benchmarks (Chen et al., 2021; Iyer et al., 2018; Odena et al., 2021) have spurred progress in code generation, they emphasize static correctness and overlook a critical question: Can LLM-generated code actually provide solutions for IT automation tasks?
Indeed, existing approaches often lack dynamic execution testing or operate in constrained settings—for example, Ansible Wisdom (Pujar et al., 2023) relies on BLEU scores, Ansible Lightspeed (Hat, 2023) focuses on isolated tasks, and IaC-Eval (Kon et al., 2024) uses artificially generated configurations curated by human annotators. As a result, LLMs’ ability to generate robust, executable Ansible scripts for real-world IT automation tasks remains largely under-explored. To address this gap, we present ITAB, a benchmark for evaluating LLMs on their ability to generate executable IT automation scripts from real-world, user-authored natural language prompts. ITAB includes 126 tasks, carefully selected across seven key IT automation domains defined by Begoug et al. (2023): (1) Server Configuration, (2) Networking, (3) Policy Configuration, (4) Templating, (5) Deployment Pipelines, (6) Variable Management, and (7) File Management (details in Section 3.1). Each task must satisfy specific operational constraints that reflect the intended system state described by the user. ITAB is different from previous work in multiple ways. First, benchmarks like IaC-Eval (Kon et al., 2024) assess whether generated code aligns with infrastructure intent specifications, while ITAB focuses on functional correctness via dynamic execution of IT automation tasks in realistic environments—marking a clear contrast with static code analysis approaches (Srivatsa et al., 2023). Second, to support reliable execution testing, we augment LLM prompts with essential context—such as file paths, configurations, and initial states—to bridge the gap between vague user instructions and executable automation scripts.
Third, ITAB specifically evaluates how LLMs handle state reconciliation, a fundamental property of IT automation tools like Ansible (Hassan et al., 2024) where the orchestrator infers desired states from scripts, compares them with current states, and applies only necessary changes (Rahman and Parnin, 2023). ITAB tests this capability by validating tasks in controlled environments with predefined initial states, verifying if the resulting system state matches user requirements. Using our test suite, we evaluated 14 open-source LLMs—selected for their accessibility, collaborative value, and cost-efficiency over proprietary models (Manchanda et al., 2024; Oketch et al., 2025)—by varying prompt specificity (TELeR levels (Santu and Feng, 2023)) and sampling temperature (Section 3.4). Overall, success rates in achieving the desired system state were strikingly low, particularly on the first attempt (pass@1), with Templating and Variable Management standing out as the most challenging IT automation domains. Error analysis of 1,411 execution failures—instances where LLM-generated scripts did not achieve the intended system state—reveals prevalent semantic errors clustering into two fundamental categories: state-reconciliation-related reasoning failures (44.87%), concerning how models track and manage system state across tasks, and deficiencies in module-specific execution knowledge (24.37%). We observe sampling trade-offs: higher temperatures improve diversity and pass@10 for complex scenarios, while lower temperatures enhance reliability (pass@1). Our error taxonomy provides insights for improving LLMs in IT automation and advancing research on executable reasoning. Our contributions are: 1. Execution-driven Benchmark for IT Automation Tasks: ITAB evaluates operational correctness, not just syntax, using real-world IT automation tasks with automated execution validation. 2.
Error Taxonomy: We identify nine specific error categories in LLM-generated IT automation scripts—variable issues, host issues, path issues, attribute configuration, template errors, logic & compliance problems, module errors, output format errors, and syntax errors—establishing a standard taxonomy that reveals fundamental gaps in LLMs’ ability to track state and follow instructions. # 2 Related Works Large Language Models (LLMs) are widely evaluated on benchmarks (Chen et al., 2021; Odena et al., 2021; Iyer et al., 2018), which focus on static code generation but overlook real-world executability. Enhanced benchmarks (Yu et al., 2024b; Zhuo et al., 2024; Jimenez et al., 2024; Yang et al., 2025; Lai et al., 2023; Zheng et al., 2025; Li et al., 2024b; Xie et al., 2024; Zhu et al., 2025; Peng et al., 2025) introduce more realistic tasks, while multilingual evaluations (Peng et al., 2024; Awal et al., 2025; Luo et al., 2025) test across languages. Semantic parsing benchmarks (Long et al., 2016; Yu et al., 2018; Yin et al., 2018a; Li et al., 2025; Yu et al., 2024a) assess natural language to code translation but often overlook system-level correctness in IT automation contexts. Within the IT automation domain specifically, benchmarks like IaC-Eval (Kon et al., 2024) (using human-curated synthetic scenarios), WISDOMAnsible (Pujar et al., 2023) (using BLEU scores instead of execution), and others (Khan et al., 2025; Srivatsa et al., 2024; Scheuner et al., 2014; Ragothaman and Udayakumar, 2024; Hat, 2023) evaluate LLMs on infrastructure tasks but do not include tasks that preserve the ambiguity of real-world practitioner queries. This gap is particularly problematic because IT automation requires grounding language in executable actions and system state transitions. State reconciliation is fundamental to tools like Ansible, where automation tools infer desired states, compare with current states, and apply necessary changes (Rahman and Parnin, 2023; Hassan et al., 2024). While existing studies (Wang et al., 2024a; Anandayuvaraj et al., 2024; Dou et al., 2024; Wang et al., 2024b; Chen et al., 2024) examine LLM failure patterns, they overlook challenges in reasoning about environment state and module-specific execution knowledge in IT automation. ITAB addresses these gaps with executable IT automation tasks from real-world issues, enabling dynamic LLM evaluation and introducing an error taxonomy that exposes key failures by LLMs in handling state reconciliation and applying domain-specific knowledge. Figure 1: (a) A syntactically and functionally correct script; (b) a syntactically valid but functionally incorrect script, which fails because ansible_distribution_major_version is gathered as a string rather than an integer, so patch_name is never set. # 3 Methodology To concretely evaluate LLM-generated code against potentially ambiguous IT automation requirements, ITAB utilizes Ansible, an open-source tool where operators use YAML playbooks and task-specific modules to declaratively define a system’s desired state. Assessing generation against this declarative, state-based paradigm is crucial, because syntactic correctness alone does not guarantee that a script achieves the user’s intended operational outcome. Thus, ITAB frames tasks as Ansible problems where LLMs must produce playbooks that verifiably achieve a target system state, often involving state reconciliation.
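State reconciliation, the property ITAB is built around, can be illustrated with a toy Python sketch (the state keys and encoding below are purely illustrative, not part of ITAB): only drifted keys are changed, and a second application against the resulting state is a no-op.

```python
# Toy illustration of state reconciliation: apply only the changes
# needed to move the current state to the desired state, and report
# what changed ("changed" vs. "ok" in Ansible terms).

def reconcile(current: dict, desired: dict):
    """Return (new_state, keys_that_changed)."""
    changes = [k for k, v in desired.items() if current.get(k) != v]
    return {**current, **desired}, changes

current = {"pkg:nginx": "absent", "svc:nginx": "stopped", "port": 8080}
desired = {"pkg:nginx": "present", "svc:nginx": "started", "port": 8080}

state, changed = reconcile(current, desired)
# Only the two drifted keys are touched; "port" is already compliant.
_, changed_again = reconcile(state, desired)  # second run: idempotent
```

This mirrors the behavior ITAB validates: a correct playbook must move the system to the target state and remain idempotent on re-execution.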
# 3.1 Task Collection and Curation To construct a benchmark with tasks that genuinely test this ability to achieve specific system states via Ansible, we began with a corpus of 52,727 Stack Overflow posts (Begoug et al., 2023) on IT Automation. Stack Overflow is a valuable resource as it contains a vast repository of user-authored questions reflecting real-world scenarios and the natural language ambiguities inherent in such problem descriptions (Yin et al., 2018b). Our benchmark development commenced with a rigorous Data Curation Phase to refine this large corpus into a high-quality set of executable candidate tasks. This phase involved two main steps: Table 1: Distribution of the 126 IT automation tasks across the 7 IT automation domains. Stratified Sampling: To ensure our benchmark reflects the diversity of real-world Ansible usage, we first identified seven key IT automation domains based on prior analysis (Begoug et al., 2023), then applied proportionate stratified random sampling to the initial corpus to ensure topic diversity. This produced a more manageable set of “Sampled Issues” aligned with observed distributions. Curation & Filtering: The “Sampled Issues” were then further refined through automated filtering to retain posts relevant to core Ansible functionalities (e.g., involving common modules like ansible.builtin and community.general) and suitable for execution. Subsequently, a rigorous manual validation stage was performed where each potential task was assessed for clarity of user intent, and feasibility of implementation and validation within our defined environment. This curation process produced 126 executable tasks forming the IT Automation Task Benchmark (ITAB). Table 1 shows their distribution across seven IT automation domains, reflecting common Ansible use cases (Begoug et al., 2023). Figure 2: Overview of the ITAB evaluation pipeline, spanning the Acquisition Phase (task selection, TELeR prompting, playbook generation, and syntactic validation), the Execution Phase (containerized environment setup, playbook execution, state analysis, and assertion-based outcome validation), and the Termination Phase (result persistence and environment reset). # 3.2 Dynamic Test Case Development We transform the curated tasks into executable test cases that validate whether LLM-generated code achieves the intended system state. For each task in our collection, we implemented a structured transformation process: 1. Context Analysis: We identified the implicit system requirements, dependencies, and environmental constraints from the original question and answers. This involved careful examination of both the question text and accepted solutions to extract the underlying automation intent, often requiring domain expertise to interpret implied requirements not explicitly stated. 2. State Definition: We formalized both initial and target states for each task. The initial state represents the system before automation, while the target state encapsulates the desired outcome after successful execution. This step required translating often ambiguous user requirements into precise, verifiable system configurations that could be automatically validated. 3. Parameter Identification: We extracted key variables that might affect execution outcomes, such as file paths, service names, configuration values, and host-specific settings. This step was crucial for ensuring that our test cases could properly evaluate how LLMs handle variable substitution, path resolution, and template rendering, core capabilities for effective IT automation. 4. Determining Functional Correctness: For each task, we developed specific assertions to verify successful state reconciliation.
These assertions checked multiple aspects of the system state including file contents, service status, and configuration values to verify the automation achieved its intended purpose. Figure 1 illustrates the distinction between syntactic and functional correctness in Ansible automation. While both scripts pass syntax validation, the right implementation fails because it incorrectly compares a string variable with integers and subsequently references an undefined variable. The left implementation correctly handles state reconciliation by using proper string comparison and default values, demonstrating why execution-based validation is essential. To create consistent testing environments, we containerized each test scenario with precisely controlled initial states, standardized environment variables and system configurations, and created task-specific verification scripts that assess whether the desired state was achieved. This development—led by authors with Ansible and Python expertise—produced the “Test Case Collection” for ITAB, comprising 733 test cases across 126 tasks. Our approach detects subtle failures in state reconciliation that static analysis or simple execution logging would miss. With these robust test cases in place, we next implemented a testing framework and execution pipeline to systematically evaluate LLM-generated solutions. # 3.3 Testing Framework & Execution Pipeline To assess operational correctness, we use a Dynamic Validation Process (Figure 2) that executes each generated playbook and checks it against task-specific constraints defined in Section 3.2. The pipeline consists of three main phases: Acquisition Phase: This phase selects an automation issue from the test case collection and uses TELeR prompts (Section 3.4) to generate candidate playbooks via LLMs. Playbooks are then validated for YAML syntax and Ansible structure. Invalid outputs are logged and skipped; valid playbooks proceed to execution.
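The Acquisition Phase's structural check can be sketched as follows. This is an illustrative simplification operating on a playbook already parsed from YAML; the function name and the accepted schema are assumptions, not ITAB's actual validator.

```python
# Minimal sketch: a playbook must be a non-empty list of plays, each a
# mapping with "hosts" and (optionally) a list of task mappings.

def looks_like_playbook(doc) -> bool:
    if not isinstance(doc, list) or not doc:
        return False
    for play in doc:
        if not isinstance(play, dict) or "hosts" not in play:
            return False
        tasks = play.get("tasks", [])
        if not isinstance(tasks, list):
            return False
        if not all(isinstance(t, dict) for t in tasks):
            return False
    return True

valid = [{"hosts": "all",
          "tasks": [{"name": "Ping", "ansible.builtin.ping": None}]}]
invalid = {"hosts": "all"}  # a bare mapping, not a list of plays
```

Outputs that fail a check of this kind are logged and skipped without entering the Execution Phase.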
Execution Phase: For each valid playbook, this phase begins by constructing a task-specific, isolated Docker environment. The playbook is then executed within this environment, triggering a transformation of system state. Our custom Python validation scripts (Section 3.2) analyze the resulting state to check whether the defined operational constraints were satisfied. Assertions based on this analysis determine the final Pass or Fail outcome. Termination Phase: This phase finalizes the evaluation, whether or not the playbook was executed. Results—including Pass/Fail status, logs, and error messages—are recorded. After each task, the Docker environment is reset to the default state. # 3.4 Experimental Setup Using the ITAB benchmark and evaluation pipeline (Sections 3.1–3.3), we evaluated the ability of 14 open-source LLMs (Table 12) to generate operationally correct Ansible playbooks across 126 real-world tasks. These models span sizes from 3B to 14B parameters. We systematically varied two generation parameters: TELeR prompt levels and sampling temperature. TELeR Levels 1–3 (Santu and Feng, 2023) were used to adjust prompt specificity; higher levels encode more structured task descriptions. We focused on these levels due to the lack of few-shot examples and external documents needed for higher-level prompting. Sampling temperature (Ackley et al., 1985) was set to 0.2, 0.4, 0.6, and 0.8 to balance deterministic and exploratory behavior. For each unique (model, task, TELeR level, temperature) configuration, we generated 15 scripts to enable robust performance estimation. Generated playbooks were evaluated using the pass@k metric (k ∈ {1, 3, 5, 10}) (Chen et al., 2021). A sample was considered successful only if it was syntactically valid, executed correctly in our test framework (Section 3.3), and met the expected outcome. All experiments were run on uniform hardware (NVIDIA H100 GPUs).
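Pass@k is computed with the unbiased estimator of Chen et al. (2021); a minimal implementation over the n = 15 samples drawn per configuration:

```python
from math import comb

# Unbiased pass@k: given n generated samples of which c pass, estimate
# the probability that at least one of k randomly drawn samples passes.
# ITAB uses n = 15 and k in {1, 3, 5, 10}.

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 1 passing sample out of 15 gives pass@1 = 1/15.
score = pass_at_k(n=15, c=1, k=1)
```

Per-task estimates are then averaged over tasks, prompt levels, and temperatures to produce the reported figures.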
# 4 Results and Analysis # 4.1 Overall Performance Across Models We evaluate LLMs’ Ansible code generation using pass@k, averaged over 126 tasks, prompt styles, and temperatures (Table 2). The results reveal the difficulty of ITAB: pass@1 scores are below 4% for nearly all models, showing that reliably generating correct automation scripts on the first attempt remains a major challenge—even with environment context. While pass@10 improves slightly (Qwen-Coder at 12.0%), overall success remains modest, highlighting the gap between syntactic fluency and true operational correctness in IT automation tasks. Table 2: pass@k for selected LLMs on ITAB (avg. over tasks, prompts, temps). Pretraining data composition affects performance, with top models like Qwen2.5-Coder-7B (70% code / 20% text (Hui et al., 2024; Yang et al., 2024)) and DeepSeek-Coder-V2 (60% code / 30% text (Zhu et al., 2024)) effectively combining code and language input—key for understanding prompts and generating correct Ansible. Qwen2.5-Coder-7B’s edge over its base model underscores how code specialization benefits from strong natural language grounding. The general-purpose Llama-3.1-8B (15T+ tokens (Dubey et al., 2024)) outperformed code-specialized CodeLlama models (e.g., 13B, 85% code / 500B tokens (Rozière et al., 2023)), suggesting massive scale and language understanding can be more vital than high code ratios for instruction-grounded IT automation tasks. These trends hint that success in ITAB might be influenced not only by code volume but also by the interplay of code exposure, general language understanding, and dataset scale. # 4.2 Impact of Temperature and Prompting Beyond overall success rates, this section analyzes how generation configurations—specifically sampling temperature and prompt detail—influence LLM performance on IT automation tasks.
Sampling Temperature: Sampling temperature significantly impacted performance, revealing a clear trade-off. As shown in Figure 3 (a), lower temperatures (e.g., 0.2) maximized pass@1 scores by favoring reliable, deterministic outputs. Conversely, higher temperatures (0.6–0.8) boosted pass@10 by increasing output diversity, aiding discovery in complex tasks. This highlights a practical precision-versus-exploration dilemma for tuning generation. Figure 3: Impact of decoding parameters for the top 6 LLMs. (a) Pass@1 and Pass@10 across temperature settings. (b) Pass@1 and Pass@10 across TELeR levels. A greater distance from the center indicates higher success rates and all values are shown as percentages. Prompt Detail (TELeR Levels): While Figure 3 (b) shows higher TELeR levels often boost Pass@10 accuracy for capable models like Qwen2.5-Coder-7B by leveraging richer input, increased prompt detail is not uniformly advantageous. It can also result in overly complex or ‘over-engineered’ solutions (Section 6), with the overall benefit varying by model capability. Table 3: pass@k with error-avoidance prompts (avg. across configs). Error-Aware Prompting: To test if targeted guidance helps, we used error-aware prompts informed by our failure taxonomy (Section 5), including hints against common mistakes. However, Table 3 shows this yielded only marginal pass@k improvements (1–4 percentage points), indicating that prompt modifications alone poorly mitigate models’ core challenges in state reconciliation reasoning and module-specific knowledge. # 4.3 Performance by IT Automation Domains Figure 4 analyzes LLM performance across IT automation domains, with rows representing domains, columns as model families, and color intensity indicating pass@1 or pass@10 rates.
While models performed better on routine tasks like Server Configuration and File Management, categories requiring precise state and variable handling—notably Templating and Variable Management—proved far more challenging, exhibiting the lowest pass@k scores. This suggests these state-sensitive domains demand reasoning beyond basic syntax or module use, aligning with our error analysis (Section 5) where difficulties related to state reconciliation are prevalent. # 5 Error Taxonomy Given the low pass@k values, we investigated what lies behind these numbers. We conducted an extensive qualitative study of 1,411 execution failures spanning IT automation domains and models, creating a taxonomy of errors in open-source models generating Ansible scripts for real-world issues. Our error analysis reveals striking patterns. Notably, models distilled for enhanced reasoning overwhelmingly fail at basic syntax—suggesting that general reasoning does not readily transfer to structured code domains like IT automation without explicit domain grounding. For the other evaluated LLMs, failures predominantly extend beyond syntax and cluster into two fundamental semantic categories: First, deficiencies in module-specific execution knowledge are prevalent. These models may identify appropriate Ansible modules but frequently err in their precise implementation, with Attribute and Parameter Errors being common. This highlights a gap where models understand what automation action is needed but not how to correctly configure module specifics. Second, failures in state reconciliation related reasoning are widespread. This encompasses errors in variable handling and path navigation—underscoring difficulties in tracking state across complex scopes (playbooks, roles, templates)—and flawed template (Jinja2) logic, indicating problems reasoning about dynamic content generation.
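These two clusters can be recomposed from the per-category shares of the 1,411 analyzed failures, using the percentages reported in this work:

```python
# Per-category error shares (% of 1,411 failures) as reported in our
# analysis, grouped into the two semantic clusters.

error_share = {
    "variable": 11.43, "host": 11.84, "path": 11.63, "template": 9.97,
    "attribute_parameter": 14.44, "module": 9.93,
}

state_categories = ("variable", "host", "path", "template")
state_reconciliation = round(sum(error_share[c] for c in state_categories), 2)
module_knowledge = round(
    error_share["attribute_parameter"] + error_share["module"], 2)
# state_reconciliation -> 44.87, module_knowledge -> 24.37
```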
The distribution of these state-related errors (including variable, host, path, and template issues) also varies by model architecture, suggesting differing blind spots in contextual reasoning. Figure 4: Pass@1 and Pass@10 performance across IT automation domain clusters for four model families. This analysis reveals key gaps—from basic syntax issues in some models to deeper failures in state reasoning and module-specific knowledge—explaining the low pass@k performance on executable IT automation tasks. # 6 Qualitative Examples To complement our quantitative results, we qualitatively analyze Qwen-Coder—the top-performing model—on representative automation tasks. For instance, examining the impact of decoding parameters revealed interesting trade-offs. In the targeted host execution (centos1) task, increasing sampling temperature (T) broadened output diversity and the variety of errors, with distinct error categories rising from four at T=0.2 to eight at T=0.8 (Table 5). Temperature also played a key role in discovering specific correct behavior: in the check mode execution task, only samples at T=0.8 correctly applied check_mode in two instances (example in Listing 1), which was absent at lower temperatures. On the multi-server directory setup task, more detailed prompts (TELeR Level 3) produced correct but over-engineered solutions, including redundant file creation and validations that could reduce maintainability. Table 4: Distribution of Error Types (%) Across Models. Bold values indicate the most frequent error category per model. To avoid skew, syntax-heavy error distributions from reasoning-distilled models are excluded from the aggregated percentages. Table 5: Error Diversity by Temperature for ‘Targeted host execution’.
These case studies show that even top models are sensitive to temperature, prompt design, and task complexity—higher temperatures boost exploration but increase errors, while detailed prompts aid correctness but can add unnecessary complexity. Listing 1: Use of Check Mode to Control Email Sending # 7 Discussion Evaluation using ITAB reveals significant challenges for current open-source LLMs in generating operationally correct Ansible code, highlighting a critical gap between syntactic validity and reliable execution in complex, stateful IT automation. Extremely low success rates (pass@10 at most 12.0%) reveal that LLMs struggle with functional correctness for real-world instructions, even with context. Our analysis indicates that models pretrained on extensive code and substantial natural language outperform those with less balanced data. This suggests success in IT automation demands not just coding ability but also robust interpretation of complex user instructions. Furthermore, our analysis of generation parameters showed that while sampling temperature provides a crucial lever for balancing reliability (pass@1) and exploration (pass@10), prompt engineering yielded limited net improvements. While higher TELeR levels offered some multi-sample gains for certain models, this was often offset by increased solution complexity. Similarly, incorporating error-aware guidance yielded only marginal improvements. This suggests that even with stronger prompts, models struggle due to more fundamental limitations—such as reasoning about system state, interpreting variable scopes, and adhering to procedural constraints. These difficulties, especially in managing variable resolution and template logic, manifest clearly in task-specific performance and error patterns.
Models consistently struggled with IT automation domains requiring precise state and variable handling, namely Templating (using Jinja2) and Variable Management (Section 4.3). Our error analysis confirms this pattern, revealing that conventional LLMs primarily fail due to state reconciliation reasoning issues (44.87% of errors across variable, host, path, and template categories) and limited module-specific execution knowledge (24.37%), confirming that tracking state changes and applying domain-specific knowledge are primary bottlenecks. Our findings imply that current open-source LLMs exhibit critical deficits in state reasoning and precise execution, hindering reliable IT automation. Overcoming key bottlenecks, such as variable and template management, demands more than prompt tuning—pointing to needs for architectural or domain-specific enhancements. Consequently, the observed high failure rates mandate dynamic execution validation, as ITAB provides, since static checks alone cannot ensure operational safety and correctness. On the brighter side, ITAB introduces a novel and challenging benchmark—difficult even for leading open-source models like DeepSeek and Qwen—laying the foundation for future research in AI-powered IT automation, including advances in model architectures, fine-tuning strategies, and domain-specific reasoning.
LLMs show promise in code generation, yet their effectiveness for IT automation tasks, particularly for tools like Ansible, remains understudied. Existing benchmarks rely primarily on synthetic tasks that fail to capture the needs of practitioners who use IT automation tools, such as Ansible. We present ITAB (IT Automation Task Benchmark), a benchmark of 126 diverse tasks (e.g., configuring servers, managing files) where each task accounts for state reconciliation: a property unique to IT automation tools. ITAB evaluates LLMs' ability to generate functional Ansible automation scripts via dynamic execution in controlled environments. We evaluate 14 open-source LLMs, none of which achieves a pass@10 rate beyond 12%. To explain these low scores, we analyze 1,411 execution failures across the evaluated LLMs and identify two main categories of prevalent semantic errors: failures in state reconciliation related reasoning (44.87%, combined from variable (11.43%), host (11.84%), path (11.63%), and template (9.97%) issues) and deficiencies in module-specific execution knowledge (24.37%, combined from attribute and parameter (14.44%) and module (9.93%) errors). Our findings reveal key limitations in open-source LLMs' ability to track state changes and apply specialized module knowledge, indicating that reliable IT automation will require major advances in state reasoning and domain-specific execution understanding.
[ "cs.CL", "cs.SE" ]
# I. INTRODUCTION Visual content plays an increasingly important role in our current digital ecosystem. With the proliferation of smartphones, tablets, and other digital devices, the consumption of video content has surged across a wide range of applications, including live streaming, digital broadcasting, video conferencing, and intelligent surveillance. These video-centric services now account for a dominant share — approximately 80% — of global internet traffic, as reported by Cisco [1]. At the core of enabling efficient video transmission lies video compression, a fundamental and long-standing research area in image and video processing. Its role is vital in balancing the trade-off between the high bitrate required to preserve rich, immersive video content and the limited bandwidth resources typically available in real-world scenarios. Over the past decades, the field has seen remarkable progress, leading to the development of cutting-edge video coding standards such as H.265/HEVC [2], H.266/VVC [3] and AOMedia (AOM) AV1 [4]. Despite these advancements, current video codecs still rely on traditional rate-distortion optimization frameworks inherited from predecessors like HEVC and VP9 [5]. More recently, both MPEG and AOM have initiated the exploration of new video coding algorithms beyond their latest standards, with working codecs ECM (Enhanced Compression Model) [6] and AVM (AOM Video Model) [7], respectively. Fig. 1: The applied coding framework, with a VSR-HE module serving as SR. However, current solutions in these models are based on their legacy foundations, which may limit their ability to fully meet the rapidly escalating demands of next-generation media applications, especially when operating at ultra-high spatial resolutions and maintaining a delicate balance between encoding/decoding efficiency and overall performance [8]. To address these limitations, deep learning has emerged as a transformative tool for video compression.
Inspired by the advances of image super-resolution [9–15], a growing body of learning-based VSR coding approaches has been proposed in recent years [16–25], demonstrating impressive improvements in coding efficiency when integrated into standard video codecs. It is noted, however, that most CNN-based video compression techniques are trained using simple distortion-based loss functions, such as mean squared error (MSE) or L1 loss. While these metrics are easy to compute, they correlate poorly with human visual perception and often result in suboptimal perceptual quality [18]. In this paper, we propose a deep learning-based video super-resolution approach, which has been submitted to the ICME 2025 Grand Challenge on Video Super-Resolution for Video Conferencing (Track 1 & 2). The proposed method builds upon a previously developed efficient architecture, the HiET block [12], and employs a perceptual loss function (PLF) combined with GAN-based training, inspired by the CVEGAN framework [18]. Fig. 2: The architecture of the proposed network for super-resolution. The HiET layers are adopted from [12]. Window sizes are set to [64, 32, 8, 32, 64], with $B = 6$ and a channel dimension of 126. Fig. 3: Sequence thumbnails of training content from the BVI-AOM [26] dataset. In addition to the training set provided by the organizers, we also used the BVI-AOM [26] database as a supplement to further improve the model generalization and boost the performance. In accordance with the challenge requirements, our method strictly processes each frame independently during upscaling and enhancement. This approach, denoted as VSR-HE, has been evaluated on H.265/HEVC compressed content (ICME Grand Challenge validation video sequences) and demonstrates consistent improvements across multiple evaluation metrics—including PSNR, SSIM, MS-SSIM, and VMAF.
Notably, it outperforms both conventional upscaling methods such as the bicubic filter and recent learning-based VSR models such as EDSR [14] and SwinIR [10]. The rest of the paper is organized as follows. Section II describes the proposed VSR-HE method, the integrated coding framework, and the training process. The coding results are then presented in Section III. Finally, Section IV concludes the paper and outlines the future work. # II. PROPOSED ALGORITHM The coding framework is illustrated in Fig. 1. Prior to encoding, the original YUV 720p videos are downsampled by a factor of 4 using a bicubic filter for Track 1, whereas the original YUV 1080p videos are downsampled by a factor of 4 using the Lanczos filter for Track 2. An H.265/HEVC video codec [2] serves as the Host Encoder to compress the low-resolution video in a low-power setting tailored to low-delay conferencing scenarios. At the decoder, when the low-resolution video stream is decoded, the proposed VSR-HE approach is applied to reconstruct the full-resolution video content. Details regarding the network architecture and the training process are described below. # A. Employed Network Architecture The overall architecture of the proposed model is depicted in Fig. 2. Specifically, a compressed 64×64 YCbCr 4:2:0 image block is first processed by a nearest-neighbor (NN) upsampling operation to restore its chroma channels, resulting in a 64×64 YCbCr 4:4:4 input. This preprocessed block is then fed into the super-resolution network, which is designed to predict a high-resolution 256×256 YCbCr 4:4:4 image block, achieving a spatial upscaling factor of 4×. To ensure compatibility with standard coding pipelines, the network output is subsequently converted back to the YCbCr 4:2:0 format. Fig. 4: Visual comparison of track 1 SR reconstruction results.
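The 4:2:0-to-4:4:4 preprocessing step relies on nearest-neighbor chroma upsampling; a minimal pure-Python sketch of the operation (the actual pipeline performs this on tensors, and a tiny 2×2 input here stands in for a 32×32 chroma plane of a 64×64 block):

```python
# Nearest-neighbor 2x upsampling of a 2D plane: each chroma sample is
# replicated over a 2x2 neighborhood, matching luma resolution.

def nn_upsample_2x(plane):
    out = []
    for row in plane:
        wide = [v for v in row for _ in range(2)]  # repeat columns
        out.append(wide)
        out.append(list(wide))                     # repeat rows
    return out

cb = [[1, 2],
      [3, 4]]
cb_444 = nn_upsample_2x(cb)
# cb_444 == [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```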
The network backbone is constructed based on the recently proposed HiET (Hierarchical Encoding Transformer) layer [12], which efficiently captures both local spatial structures and long-range contextual dependencies. Building upon this design, we propose a refined network architecture specifically optimized for compressed-domain restoration and super-resolution tasks. As shown in Fig. 2, the HiET layers are configured with window sizes of [64, 32, 8, 32, 64], where the number of stacked blocks is set to $B = 6$ and the hidden channel dimension is fixed at 126. This configuration is carefully selected to balance model capacity and computational efficiency, making it particularly suitable for practical deployment in video conferencing scenarios. # B. Training Configuration The training process of the proposed VSR-HE model is divided into two stages. In the first stage, the network is optimized using a combined perceptual loss function based on [18], which balances pixel-wise accuracy and perceptual fidelity: $$ \mathcal{L}_{p} = 0.3\,\mathcal{L}_{L1} + 0.2\,\mathcal{L}_{SSIM} + 0.1\,\mathcal{L}_{L2} + 0.4\,\mathcal{L}_{MS\text{-}SSIM} $$ where $\mathcal{L}_{L1}$ and $\mathcal{L}_{L2}$ denote the pixel-wise L1 and L2 losses, respectively, while $\mathcal{L}_{SSIM}$ and $\mathcal{L}_{MS\text{-}SSIM}$ represent the Structural Similarity Index and its multi-scale variant. This combined objective ensures both structural preservation and enhanced perceptual quality during the early training phase. In the second stage, following the strategy proposed in [27], we further introduce an adversarial loss component based on the GAN framework to refine the perceptual realism of the super-resolved outputs. The total loss in this stage is formulated as the weighted sum of the perceptual loss $\mathcal{L}_{p}$ and the GAN loss $\mathcal{L}_{GAN}$.
$$ \mathcal{L}_{total} = \mathcal{L}_{p} + 0.05\,\mathcal{L}_{GAN}. $$ Fig. 5: Visual comparison of track 2 SR reconstruction results. The employed model is implemented using PyTorch 1.10 [28]. Training is performed with the Adam optimizer [29], with default hyperparameters $\beta_1 = 0.9$ and $\beta_2 = 0.999$. A batch size of 16 is used. The learning rate is initially set to $1 \times 10^{-4}$ and is progressively reduced by a factor of 2 at 50k, 100k, 200k, and 300k iterations, consistently across both training stages to facilitate stable convergence. Both training and evaluation were executed on an NVIDIA RTX 4090 GPU. # C. Training Content The proposed VSR-HE model is optimized through supervised training on a curated dataset, as detailed below. In addition to the provided REDS dataset [30] for Track 1 and the VCD dataset [31] for Track 2, we further incorporate original video sequences from the BVI-AOM database [26] to diversify and enhance the training corpus. Representative thumbnails from the dataset are illustrated in Fig. 3. All supplemental sequences were encoded by HEVC HM 18.0 with six different quantization parameter (QP) values: 17, 22, 27, 32, 34, and 37, after downsampling. Following [18], both the degraded sequences and their corresponding high-quality originals were uniformly cropped into 64×64 (compressed input) and 256×256 patches (high-resolution ground truth), respectively. These patches were randomly sampled to construct the training pairs. To further enhance data diversity and improve generalization, common augmentation techniques such as random rotations and horizontal/vertical flipping were applied. This comprehensive data preparation process resulted in approximately 100,000 patch pairs for each track.
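As a sanity check, the described step schedule (initial rate 1e-4, halved at 50k, 100k, 200k, and 300k iterations) can be written as:

```python
# Sketch of the step learning-rate schedule used in both training
# stages: halve the base rate at each milestone that has been passed.

MILESTONES = (50_000, 100_000, 200_000, 300_000)

def learning_rate(step: int, base_lr: float = 1e-4) -> float:
    halvings = sum(step >= m for m in MILESTONES)
    return base_lr / (2 ** halvings)
```

After the final milestone the rate settles at 1e-4 / 16 = 6.25e-6.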
The model was trained independently on the respective datasets for Track 1 and Track 2, enabling it to effectively handle compressed video content across a broad range of QP values while maintaining robustness to various compression artifacts. TABLE I: PSNR-Y, SSIM, MS-SSIM, and VMAF results for the proposed methods and all benchmarks for both Track 1 and 2. TABLE II: Model complexity results. # III. RESULTS AND DISCUSSION Five sequences, provided by the ICME 2025 grand challenge organizer, are used to evaluate the effectiveness of the proposed coding framework. Each sequence contains up to 300 frames and is compressed with six different QPs, ranging from 17 to 37, after down-sampling. The decoded sequences were also provided by the organizer and stored in mp4 format. These sequences were first converted into YCbCr 4:4:4 format and then input into the VSR-HE model to recover to their original resolution. TABLE. I summarizes the average performance of the proposed VSR-HE method for the test sequences in terms of VMAF, SSIM, MS-SSIM and PSNR-Y for both tracks. To benchmark the performance of our proposed VSR-HE, we also test several other methods, including bicubic filter, EDSR [14], CVEGAN [18] and SwinIR [10]. According to the evaluation results, the proposed model demonstrates strong performance advantages across multiple aspects, including perceptual quality and fidelity to the original content. Visual comparisons with Bicubic/Lanczos filters are presented in Fig. 4 for Track 1 and Fig. 5 for Track 2. As shown, the proposed VSR-HE model effectively mitigates compression artifacts and reconstructs finer image details. These results highlight the model’s effectiveness in enhancing real-world compressed videos and its potential for various video enhancement applications in practical scenarios. Moreover, we also report the training time, number of parameters, FLOPs, and runtime in TABLE. II. 
As shown in the table, the total number of parameters of VSR-HE is 5.43M and the processing speed for each frame is $1 4 0 ~ \mathrm { m s }$ . These results offer valuable insights for the organizers to conduct an in-depth analysis and comparison of the strengths and weaknesses of each participating method.
This paper presents a general-purpose video super-resolution (VSR) method, dubbed VSR-HE, specifically designed to enhance the perceptual quality of compressed content. Targeting scenarios characterized by heavy compression, the method upscales low-resolution videos by a ratio of four, from 180p to 720p or from 270p to 1080p. VSR-HE adopts hierarchical encoding transformer blocks and has been sophisticatedly optimized to eliminate a wide range of compression artifacts commonly introduced by H.265/HEVC encoding across various quantization parameter (QP) levels. To ensure robustness and generalization, the model is trained and evaluated under diverse compression settings, allowing it to effectively restore fine-grained details and preserve visual fidelity. The proposed VSR-HE has been officially submitted to the ICME 2025 Grand Challenge on VSR for Video Conferencing (Team BVI-VSR), under both the Track 1 (General-Purpose Real-World Video Content) and Track 2 (Talking Head Videos).
[ "eess.IV", "cs.CV" ]
# 1 Introduction Large Language Models (LLMs) and agent frameworks are catalyzing a profound transformation in software engineering [63, 38, 51, 25, 28, 19, 65], significantly improving the functional correctness of their code generation and starting to rival human engineers in certain tasks [7, 58, 23]. However, this focus on correctness often overshadows another critical dimension of software quality: computational efficiency. In real-world systems, where latency and memory budgets are paramount, code that is merely correct but inefficient can precipitate severe performance bottlenecks, leading to inflated computing costs and system-wide latencies. This chasm between functional correctness and computational efficiency represents a formidable challenge to deploying automatic code generation in mission-critical tasks. This challenge has also spurred the development of code efficiency benchmarks. For instance, EffiBench [21] introduces a relative performance metric against reference solutions, while PIE4PERF [50] utilizes system simulation to meticulously assess the impact of optimizations across a vast corpus of $\mathrm { C } { + } { + }$ code pairs. Moving beyond pairwise comparisons, Mercury [14] employs percentile ranking against human solutions to highlight the efficiency disparity, and EVALPERF [36] categorizes generated solution efficiency against reference solutions. These benchmarks consistently point out that despite their prowess in generating correct code, current LLMs often produce solutions with suboptimal efficiency [49]. Initial attempts to address this gap, such as Chain-of-Thought [55] in PIE [50], self-optimization in Effilearner [20], or fine-tuning LLMs on an efficiency-oriented dataset [22], have yielded limited success, often failing to instill the adaptive knowledge for robust efficiency improvements. Preprint. Under review. 
Reinforcement Learning Supervised Fine-Tuning 48 + Supervised Fine-Tuning Reinforcement Learning return indices of the two numbers such that they add up to target. Problem: Given an array of integers nums and an integer target, · . def twoSum(self, nums: List[int], target: int) $\Rightarrow$ List[int]: .. out = {} : for ico,mpnluemmeinten=umtearagteet n-umnsum): if complement in out: return [out[complement], i] 55 out[num] = i def twoSum(self, nums: List[int], target: int) $$ List[int]: 50 out = [] xfor= liyeni=n(ntuarmrasgneg)te 0-, uxm,s1[)i:] if y in (nums[:i] + nums[i+1:]): 45 1 2 4 8 24 1 2 4 8 return ouotut.append(i) Number of Optimization Iterations Number of Optimization Iterations In this work, we introduce a novel iterative optimization framework (IOF) designed to enhance LLM-generated code efficiency through a closed-loop system of generation and evaluation, driven by Afterburner and Monolith. As shown in Figure 2, Afterburner takes the original code as input and generates an improved one for the subsequent optimization, where Monolith evaluates the improved code and feeds the empirical code performance back to Afterburner. The process mirrors how human developers often optimize code through trial and feedback. Our extensive experiments on the novel Venus benchmark and the widely-used APPS [18] benchmark demonstrate the varied learning dynamics of different optimization strategies within IOF. While Supervised Fine-Tuning (SFT) [30] offers initial efficiency gains in the first few iterations, it quickly saturates and struggles with sustained improvement. Direct Preference Optimization $( D P O )$ [46] consistently performs better than SFT but has the same trend as SFT. In stark contrast, Group Relative Policy Optimization (GRPO) [47] continuously refines code performance. 
As illustrated in Figure 1, it boosts $\operatorname { P A S S } { \ @ 1 }$ from $47 \%$ to $62 \%$ and significantly elevates all efficiency metrics, for instance, increasing BEYOND-I from $31 \%$ to $45 \%$ . We attribute these divergent behaviors to the fundamental nature of what each method tends to capture: SFT tends to capture superficial patterns from mimicking examples. DPO internalizes static preferences based on pairwise comparisons from offline data. In contrast, through online interaction with execution feedback, GRPO cultivates an adaptive proficiency in code efficiency optimization, which enables it to explore and exploit the solution space effectively within an iterative, test-time optimization process. Our key contribution not only lies in demonstrating effective test-time improvement of code efficiency but, more critically, in dissecting how different strategies contribute to this iterative optimization and highlighting the superior adaptability of online feedback-driven RL approaches in efficient-oriented code generation. # 2 Related Work LLMs for Code Generation LLMs have demonstrated remarkable progress in code generation, fueled by extensive training on vast code corpora [2, 33, 42, 38]. Building upon foundational models such as Llama [52] and Qwen [59], subsequent efforts have specialized these models for coding tasks, yielding variants like StarCoder [38], QwenCoder [27] and OpenCoder [26]. These models excel in diverse applications, including code completion [8, 30, 11], program repair [41, 37, 62], and unit test generation [24, 3]. Despite their success in generating functionally correct code, as evidenced by benchmarks like HumanEval [8], LiveCodeBench [29], and BigCodeBench [66], the computational efficiency of the generated code remains a less explored frontier. Code Efficiency Evaluation Addressing this gap, recent work has focused on quantitatively assessing the efficiency of LLM-generated code [60, 30, 44]. 
EffiBench [21] collects 1000 efficiency-critical Python problems, evaluating code via an efficiency ratio against reference solutions. PIE4Effi [50] emphasizes the importance of reliable measurement. It utilizes a system simulator for code execution and contributes a dataset of over $7 7 , 0 0 0 ~ \mathrm { C } + +$ efficiency preference pairs. Deviating from pairwise comparisons, EVALPERF [36] introduces Differential Performance Evaluation (DPE) on 121 performance-challenging tasks, categorizing generated solution efficiency against reference implementations. Mercury [14] measures efficiency by percentile rank against a substantial corpus of human-written solutions. More recently, ENAMEL [45] proposed an unbiased estimator $e f f @ k$ for time efficiency. These benchmarks reveal that current LLMs still significantly struggle to produce code that consistently matches expert-level computational efficiency. Building on these efforts and inspired by Mercury [14], our work introduces the Venus dataset, which expands upon existing resources with more tasks and solutions to facilitate a more rigorous efficiency assessment. Preference Alignment in Code Generation While functional correctness is paramount, code efficiency is a critical yet often overlooked preference in LLM-based code generation. Initial attempts to steer LLMs towards efficiency via prompt engineering, such as Chain-of-Thought [55] in PIE [50] or self-optimization in Effilearner [20]. Subsequent instruction tuning methods have predominantly aimed at enhancing functional correctness [39, 55, 56]. Although some recent works like SwiftCoder [22] and PIE4PERF [50] used efficiency-focused datasets for model fine-tuning, their reliance on cross-entropy loss hindered the direct instillation of nuanced efficiency preferences. To achieve finer-grained preference alignment, RL has emerged as a powerful paradigm for code preference alignment [54]. 
Initial methods like CodeRL [32] use code execution outcomes as feedback. More recent approaches such as StepCoder [13], RLEF [16], and Focused-DPO [61] have significantly advanced functional correctness by leveraging execution feedback. However, these RL methods have largely neglected computational efficiency as a primary optimization target, with existing execution environments typically providing only correctness-based rewards. To enable RL-based optimization for code efficiency, our work introduces Monolith, a high-fidelity sandbox that delivers real-time efficiency metrics, thereby fostering a deeper preference for performant code. # 3 Iterative Optimization Framework While current LLMs can produce viable solutions, these often fall short of the performance standards required in resource-constrained or time-sensitive applications [14, 45]. To bridge this gap, we introduce the Iterative Optimization Framework (IOF), a novel approach designed to enhance the efficiency of LLM-generated code. As illustrated in Figure 2, IOF employs a closed-loop system where code is progressively refined through cycles of forward generation and backward evaluation. Central to IOF are two synergistic components: Afterburner, a model suite that proposes targeted efficiency improvements, and Monolith, a robust code execution sandbox that provides precise, real-world performance metrics. The interplay between these components drives each optimization iteration: commencing with an original code and an efficiency instruction, Afterburner takes the inputs to generate an improved code alongside its reasoning content. This improved code is subsequently executed within Monolith, yielding empirical efficiency feedback to guide the subsequent optimization iteration. The sections detail the mechanics of Afterburner and Monolith, and the overall iterative workflow as formalized in Algorithm 1. 
# 3.1 Afterburner: Code Efficiency Optimization Models In the realm of aviation, an afterburner is a secondary combustion system integrated into jet engines, designed to provide a significant thrust augmentation [67]. While this surge in power comes at the cost of considerably higher fuel consumption, it serves as a critical mechanism for scenarios demanding peak performance. Drawing a parallel to this concept, our Afterburner aims to push the efficiency of LLM-generated code to the maximum. Instead of consuming more fuel, Afterburner leverages the inference-time scaling law[57] and the execution feedback from the Monolith sandbox to iteratively refine generated code. For the $i$ -th iteration, the process can be formalized as: $$ \mathcal { C } _ { i } ^ { o u t } = \tt A f t e r b u r n e r ( \mathcal { P } , \mathcal { Z } , \mathcal { C } _ { i } ^ { i n } , \mathcal { M } _ { i } ^ { i n } ) , $$ where $\mathcal { P }$ is the problem description, ${ \mathcal { T } } \in \{$ ‘time-efficient’, ‘memory-efficient’, ‘integral-efficient’ denotes a specific efficiency instruction (e.g., minimizing execution time, reducing peak memory usage, or optimizing the integral score). $\mathcal { C } _ { i } ^ { i n }$ denotes the input solution for the current iteration, and $\mathcal { M } _ { i } ^ { i n } = \mathtt { M o n o l i t h } ( \mathcal { C } _ { i } ^ { i n } )$ is its performance metric corresponding to objective $\boldsymbol { \mathcal { T } }$ . The refined candidate code $\mathcal { C } _ { i } ^ { o u t }$ is then evaluated to obtain its performance metric, $\mathcal { M } _ { i } ^ { o u t } = \mathtt { M o n o l i t h } ( \mathcal { C } _ { i } ^ { o u t } )$ . 
For the subsequent iteration, we select the best-performing code via a greedy approach: $$ ( \mathcal { C } _ { i + 1 } ^ { i n } , \mathcal { M } _ { i + 1 } ^ { i n } ) = \left\{ \begin{array} { l l } { ( \mathcal { C } _ { i } ^ { o u t } , \mathcal { M } _ { i } ^ { o u t } ) } & { \mathrm { i f ~ } \mathcal { M } _ { i } ^ { o u t } \succ \mathcal { M } _ { i } ^ { i n } \hfill , } \\ { ( \mathcal { C } _ { i } ^ { i n } , \mathcal { M } _ { i } ^ { i n } ) } & { \mathrm { o t h e r w i s e } } \end{array} \right. $$ Figure 2: Inference Workflow of the Iterative Optimization Framework (IOF). In the forward generation (blue lines), Afterburner takes a problem description, efficiency instruction, original code (optional), and original performance as input. It then produces reasoning content and improved code in a designated format. For the backward evaluation (green lines), the original code and original performance are updated with the improved versions. The detailed pipeline is defined in Algorithm 1. where $\mathcal { M } _ { i } ^ { o u t } \succ \mathcal { M } _ { i } ^ { i n }$ indicates that the performance of $\mathcal { C } _ { i } ^ { o u t }$ is superior to that of $\mathcal { C } _ { i } ^ { i n }$ with respect to the objective $\mathcal { T }$ . The iterative process continues for a predetermined number of iterations $N _ { i t e r }$ . # 3.2 Monolith: Code Execution Sandbox Monolith is a catalyst of IOF, which executes generated code and provides the empirical performance feedback to the iterative optimization. Since the efficacy of RL and preference optimization methods hinges on the quality and consistency of the feedback signal [47, 16], Monolith prioritizes the consistent measurement in its design. While theoretical complexity analysis (e.g., Big $O$ notation) offers high-level insights into algorithmic scalability [10], it often fails to capture the nuances of real-world performance. 
A return signal without discrimination may cause the optimization algorithm to lose the optimization gradient [17]. Moreover, Constant factors, specific implementation details (such as language runtime, library choices, and compiler optimizations), and hardware interactions (CPU architecture, memory hierarchy) significantly influence actual execution time and memory consumption [36]. Therefore, for the Afterburner models to learn to generate genuinely efficient code, they require empirical metrics from Monolith that reflect these practical realities. $$ \{ p a s s e d , t i m e , m e m o r y , i n t e g r a l \} = \mathbb { M } \mathrm { o n o l i t h } ( c o d e , t e s t \_ c a s e s ) , $$ where code is the code, paassed is a boolean value indicating whether code is passed all test cases. time, memory, and integral denote the absolute execution time, peak memory usage, and the integral score of the code, respectively. We will explain how to measure these metrics in Section 5. # 4 Code Efficiency Optimization Data Preparation Recent initiatives like Mercury [14], EffiBench [21], and EVALPERF [36] have made important strides in evaluating code efficiency (see Table 5), but persistent limitations remain. To address these shortcomings, while also building upon these foundational efforts, we introduce Venus, a new dataset designed for rigorous code efficiency assessment: (1) Inspired by Mercury [14] and EVALPERF [36], it computes percentile ranks against a diverse distribution of reference solutions, unlike methods relying on single, potentially biased baselines [21, 50]. (2) Venus provides a substantially larger set of solutions, averaging 106.6 per task by expanding upon EffiBench [21] and Mercury [14]. This is a significant increase from the fewer than 20 solutions found in existing Python efficiency benchmarks as listed in Table 5, ensuring more stable and reliable percentile calculations. 
(3) It offers a holistic assessment by evaluating execution time, memory usage, and their combined impact. As shown in Table 7, Venus Python set includes 2,181 training and 300 test tasks. From this data, we derived training subsets for various optimization methods: • SFT Dataset. For Supervised Fine-Tuning, ${ D S } _ { S F T }$ is constructed by sampling pairs of functionally correct solutions for tasks from $\mathtt { V e n u s } _ { t r a i n }$ , where the solution exhibiting inferior computational efficiency is designated $\displaystyle { \mathcal { C } } ^ { - }$ and the superior one is $\mathcal { C } ^ { + }$ . $D S _ { S F T }$ comprises 58,833 training instances, with 19,611 instances generated for each of the three targeted efficiency instructions. • DPO Dataset. Each instance in the preference dataset $D S _ { D P O }$ consists of a prompt $( \mathcal { P } , \mathcal { T } , \mathcal { C } , \mathcal { M } )$ and a pair of responses $( \mathcal { C } ^ { + } , \mathcal { C } ^ { - } )$ , where we randomly sample three solutions from $\mathtt { V e n u s } _ { t r a i n }$ , assigning the best code as ${ \mathcal { C } } ^ { + }$ and worst as $\displaystyle { \mathcal { C } } ^ { - }$ , and the mediocre $\mathcal { C } ^ { b a s e l i n e }$ as the baseline, according to their efficiency performance $\mathcal { M }$ with respect to the objective $\boldsymbol { \mathcal { T } }$ . Averaging approximately 13.3K instances per efficiency instruction type, $D S _ { D P O }$ contains 90,864 training instances. • Cold Start Dataset. This dataset is designed to rapidly adapt Afterburner models to the expected response format. $D S _ { C O L D }$ is constructed using tasks from $\mathtt { V e n u s } _ { t r a i n }$ , for which initial responses were generated by Gemini $2 . 5 ~ P r o$ . From an initial collection of 3,392 raw responses with the ‘<thinking><solution>’ format, we filter and construct $D S _ { C O L D }$ with 2,071 instances. • GRPO Dataset. 
Since AfterburnerGRP O learns from code execution feedback, the $D S _ { G R P O }$ training dataset does not require ground-truth responses. Each instance herein is a prompt structured as $( \mathcal { P } , \mathcal { \bar { Z } } , \mathcal { C } , \mathcal { M } )$ . $D S _ { G R P O }$ employs all 984 distinct tasks in $\mathtt { V e n u s } _ { t r a i n }$ . Supervised Fine-Tuning SFT is the most intuitive approach to imbue LLMs with an initial understanding of code efficiency. Its core idea is to expose the model to the inefficient code paired with the optimized code, thereby teaching it to learn the patterns that transform suboptimal solutions into more performant ones. The Afterburner ${ _ { S F T } }$ takes a prompt $\mathcal { X } = ( \mathcal { P } , \mathcal { Z } , \mathcal { C } ^ { - } , \mathcal { M } ^ { - } )$ , and the training objective is to minimize the cross-entropy loss for generating the expected response $\mathcal { C } ^ { + }$ : $$ \begin{array} { r } { \mathcal { L } _ { S F T } ( \pi _ { \theta } ) = - \mathbb { E } _ { ( \mathcal { P } , \mathcal { Z } , \mathcal { C } ^ { + } , \mathcal { C } ^ { - } , \mathcal { M } ^ { - } ) \sim D S _ { S F T } } \left[ \log \pi _ { \theta } ( \mathcal { C } ^ { + } | \mathcal { X } ) \right] , } \end{array} $$ where $\pi _ { \theta } ( \mathcal { C } ^ { + } | \mathcal { X } )$ is the likelihood of generating the optimized code $\mathcal { C } ^ { + }$ given the prompt $\chi$ . It impels LLMs to learn the mapping from inefficient code to their more efficient counterparts. Direct Preference Optimization While SFT provides a strong baseline, DPO offers a more direct way to align LLMs with efficiency preferences offline, without the need for explicit sampling from a reference model during the training. 
DPO directly increases the likelihood of positive responses ${ \mathcal { C } } ^ { + }$ and decreases that of negative ones $\displaystyle { \mathcal { C } } ^ { - }$ , thereby tuning the model to inherently generate more efficient code according to the specified efficiency objective $\boldsymbol { \mathcal { T } }$ . Its key advantage is directly optimizing for the preference objective. The Afterburner $_ { D P O }$ loss function is formulated as: $$ \begin{array} { r } { \mathcal { L } _ { D P O } ( \pi _ { \theta } ; \pi _ { r e f } ) = - \mathbb { E } _ { ( \chi , c ^ { + } , c ^ { - } ) \sim D S _ { D P O } } \left[ \log \sigma \left( \beta \log \frac { \pi _ { \theta } ( C ^ { + } | \mathcal { X } ) } { \pi _ { r e f } ( C ^ { + } | \mathcal { X } ) } - \beta \log \frac { \pi _ { \theta } ( C ^ { - } | \mathcal { X } ) } { \pi _ { r e f } ( C ^ { - } | \mathcal { X } ) } \right) \right] , } \end{array} $$ where $\pi _ { \boldsymbol { \theta } }$ is the target model, $\pi _ { r e f }$ is a reference model (we use the above Afterburner $S F T$ model as the reference). $\mathcal { X } = ( \mathcal { P } , \mathcal { I } , \mathcal { C } ^ { b a s e l i n e } , \mathcal { M } )$ is the input prompt. $\beta$ is a hyperparameter controlling the deviation from the reference model, and $\sigma$ is the logistic function. Group Relative Policy Optimization Building upon the principles of preference-based learning, GRPO [47] extends the pairwise offline comparison of DPO to a group-wise online ranking scenario. For a given prompt, GRPO generates multiple roll-outs and learns the relative advantage amongst these roll-outs. Inspired by recent works [17, 16], we explore whether it can enhance the code efficiency. As depicted in Figure 4, we first SFT the base model on $D S _ { C O L D }$ to align it quickly with the designated response format, thereby providing a well-aligned foundation for AfterburnerGRP O. Reward Functions. 
We encourage AfterburnerGRP O to think about how to improve the efficiency before generating correct and efficient code. Therefore, the reward function comprises three parts: format control reward, functional correctness reward, and computational efficiency reward: • Format Control Reward. This reward component encourages the model to structure its output in a predefined format. Specifically, Afterburner models are expected to have a thinking phase encapsulated in <thinking>...</thinking> tags, followed by the actual code within <solution>...</solution> tags. Eq. (6) defines the reward as 1 when the model response matches the regex pattern (See Appendix E.5), otherwise, the reward will be -1. $$ R _ { F o r m a t } ( C ^ { o u t } ) = \left\{ \begin{array} { l l } { { 1 } } & { { \mathrm { i f ~ } C ^ { o u t } \mathrm { ~ m a t c h e s ~ t h e ~ p a t t e r n } } } \\ { { - 1 } } & { { \mathrm { o t h e r w i s e } } } \end{array} \right. $$ • Functional Correctness Reward. Ensuring the generated code is functionally sound is paramount. We define a boolean $\mathcal { A } = \mathsf { M o n o l i t h } ( C , t e s t \_ c a s e s )$ to indicate whether the provided code $C$ passes all test cases, where test_cases is a set of test cases. $R _ { c o r r e c t }$ is defined as: $$ R _ { c o r r e c t } ( C ^ { i n } , C ^ { o u t } ) = \left\{ \begin{array} { l l } { 1 . 0 } & { \mathrm { i f ~ } \mathcal { A } ^ { o u t } = 1 \mathrm { ~ a n d ~ } \mathcal { A } ^ { i n } = 0 \mathrm { ~ ( u p g r a d e ) } } \\ { 0 . 5 } & { \mathrm { i f ~ } \mathcal { A } ^ { o u t } = 1 \mathrm { ~ a n d ~ } \mathcal { A } ^ { i n } = 1 \mathrm { ~ ( m a i n t a i n e d ~ p a s i n g ~ s t a t u s ) } } \\ { - 0 . 5 } & { \mathrm { i f ~ } \mathcal { A } ^ { o u t } = 0 \mathrm { ~ a n d ~ } \mathcal { A } ^ { i n } = 0 \mathrm { ~ ( m a i n t a i n e d ~ f a i l i n g ~ s t a t u s ) } } \\ { - 1 . 
0 } & { \mathrm { i f ~ } \mathcal { A } ^ { o u t } = 0 \mathrm { ~ a n d ~ } \mathcal { A } ^ { i n } = 1 \mathrm { ~ ( d o w n g r a d e ) } } \end{array} \right. $$ • Efficiency Improvement Reward. Given the efficiency instruction $\boldsymbol { \mathcal { T } }$ , this reward measures the relative improvement in the corresponding performance metric $\mathcal { E } \in \{ t i m e , m e m o r y , i n t e g r a l \}$ of a roll-out code compared to the baseline input code. Here, $\mathcal { E } = \mathtt { M o n o l i t h } ( C , t e s t \_ c a s e s )$ and $\mathcal { E } _ { u p p e r }$ are the absolute performance value and the upper limitation with respect to $\mathcal { T }$ , respectively. $$ \mathcal { R } _ { e f f i c i e n c y } = \mathrm { t a n h } ( \mathcal { E } _ { g a i n } ) , \quad \mathcal { E } _ { \mathrm { g a i n } } = \frac { \mathcal { E } _ { c i l p } ^ { \ i n } - \mathcal { E } _ { c i l p } ^ { \ o u t } } { \mathcal { E } _ { c i l p } ^ { \ i n } + \epsilon } , \quad \mathcal { E } _ { c l i p } = \mathrm { c l i p } ( \mathcal { E } , 0 , \mathcal { E } _ { u p p e r } ) , $$ • Final Reward. We apply an additive reward to combine all rewards comprehensively. $\beta _ { f } , \beta _ { e }$ , and $\beta _ { c }$ are weight hyperparameters to each corresponding reward competent. $$ \mathcal { R } _ { f i n a l } = \beta _ { f } \cdot \mathcal { R } _ { f o r m a t } + \beta _ { c } \cdot \mathcal { R } _ { c o r r e c t } + \beta _ { e } \cdot \mathcal { R } _ { e f f i c i e n c y } $$ Objective. GRPO leverages a policy gradient approach to optimize the target policy $\pi _ { \boldsymbol { \theta } }$ based on the old one $\pi _ { \theta _ { \mathrm { o l d } } }$ . The training objective encourages the policy to favor generated candidates that not only possess high intrinsic quality but also demonstrate superior performance relative to their peers within the same generation group for a given prompt. 
This objective is formalized as: $$ \mathcal { L } _ { G R P O } ( \pi _ { \theta } ; \pi _ { \theta _ { \mathrm { o l d } } } ) = - \mathbb { E } _ { \chi \sim D S _ { G R P O } , \{ \mathcal { O } _ { i } \} _ { i = 1 } ^ { G } \sim \pi _ { \theta _ { \mathrm { o l d } } } ( \mathcal { O } _ { i } | \mathcal { X } ) } \left[ \operatorname* { m i n } ( \mathcal { W } _ { i } , \mathrm { c l i p } ( \mathcal { W } _ { i } , 1 + \epsilon , 1 - \epsilon ) \cdot \mathcal { A } _ { i } ) \right] , $$ $$ \begin{array} { r } { \mathcal { X } = ( \mathcal { P } , \mathcal { Z } , \mathcal { C } ) , \quad \mathcal { W } _ { i } = \frac { \pi _ { \theta } ( \mathcal { O } _ { i } \mid \mathcal { X } ) } { \pi _ { \theta _ { \mathrm { o l d } } } ( \mathcal { O } _ { i } \mid \mathcal { X } ) } , \quad \mathcal { A } _ { i } = \frac { \mathcal { R } _ { i } - \mathrm { m e a n } ( \{ \mathcal { R } _ { i } \} _ { i = 1 } ^ { G } ) } { \mathrm { s t d } ( \{ \mathcal { R } _ { i } \} _ { i = 1 } ^ { G } ) } , } \end{array} $$ where $\chi$ is the input prompt, $\{ O _ { i } \} _ { i = 1 } ^ { G }$ is the roll-out group with the size $G . \mathcal { W } _ { i }$ denotes the policy ratio comparing how the new policy $\pi _ { \boldsymbol { \theta } }$ prefer a generation against the old policy $\pi _ { \theta _ { \mathrm { o l d } } }$ . To prevent drastic $\mathcal { W } _ { i }$ updates, we clip the ratio within the interval $[ 1 - \epsilon , \bar { 1 } + \epsilon ]$ . Finally, $\mathbf { \mathcal { A } } _ { i }$ is computed on the reward score of each roll-out $\mathcal { R } _ { i }$ , to show the relative advantage in the same roll-out group. # 5 Experiment Setup Dataset Recipe Venus Python subset contains 2,181 algorithmic problems, each accompanied by a validated test case generator and an average of $\overline { { I O 6 . 6 } }$ human solutions, enabling robust empirical analysis of code efficiency beyond functional correctness. 
Based on Venus, Section 4 introduces several datasets for Afterburner training, including $D S _ { S F T }$ , $D S _ { D P O }$ , $D S _ { C O L D }$ , and $D S _ { G R P O }$ . APPS is a widely recognized benchmark for evaluating the functional correctness of code generation models [18]. While its original design, with 21.2 test cases and 23.4 solutions per problem, focuses on correctness, we integrate it into our efficiency evaluation pipeline as an auxiliary benchmark (see Appendix C). Functional Correctness Ensuring functional correctness is a prerequisite for code generation models. Following the evaluation paradigm in Codex [9], we employ the $\begin{array} { r l } { \mathbf { P } \mathbf { A } \mathbf { S } \mathbf { S } \boldsymbol { @ } \boldsymbol { 1 } } & { { } = } \end{array}$ $N _ { p a s s e d } / N _ { t o t a l }$ score to assess the global functional correctness, where $N _ { p a s s e d }$ is the number of passed generations and $N _ { t o t a l }$ is the total number of test tasks. $$ \begin{array} { r } { \mathrm { P R } ( x , D ) \ = \ \frac { 1 } { | D | } \sum _ { d \in D } { \mathbf { 1 } } [ d \geq x ] . } \end{array} $$ Figure 3: Illustration of task-level efficiency metrics. $$ \begin{array} { r } { 3 \mathrm { E Y O N D - \{ T , ~ M , ~ I \} } = \frac { \sum _ { k = 1 } ^ { | V | } \mathrm { P R } ( \mathcal { E } _ { k } ^ { g e n } , \{ D _ { k } ^ { T } , D _ { k } ^ { M } , D _ { k } ^ { I } \} ) } { | V _ { t e s t } | } , } \end{array} $$ Computational Efficiency Following Mercury [14] and EffiBench [21], we avoid employing absolute efficiency metrics because they are highly sensitive to hardware configurations and operating systems. For each task in Venus test set $v _ { k } \in V _ { t e s t }$ , we instead compute percentile ranks of an absolute performance $\mathcal { E } _ { k } ^ { g e n }$ relative to the distribution $D _ { k }$ collected from corresponding reference solutions $S _ { k }$ . 
Except the execution time $( r _ { k } ^ { g e n } )$ and peak memory $( \operatorname* { m a x } ( m _ { k } ^ { g e n } ( t ) ) )$ , we also consider using the integral score $\begin{array} { r } { i _ { k } ^ { g e n } = \int _ { t = 0 } ^ { r _ { k } ^ { g e n } } m _ { k } ^ { g e n } ( t ) \mathrm { d } { \underline { { t } } } } \end{array}$ as a comprehensive efficiency metric, where $m _ { k } ^ { g e n } ( t )$ is the instantaneous memory footprint at time $t$ . To compute relative efficiency metrics, we establish reference distributions of execution time overhead $D _ { k } ^ { T } = \{ r _ { k } ^ { n } \} _ { n = 1 } ^ { | S _ { k } | }$ , memory overhead $D _ { k } ^ { M } = \{ m _ { k } ^ { n } \} _ { n = 1 } ^ { | S _ { k } | }$ , and integral efficiency DIk = {ikn}|nS=k|1, where rkn, mkn, and ikn are the absolute execution time, memory usage, and integral score of the $n$ -th collected solution $s _ { k } ^ { n } \in S _ { k }$ , respectively. Based on these distributions, we can calculate the task-level efficiency percentile-rank of the generated solution in Eq. (12). The global efficiency metrics are computed as the average of all task-level percentile-ranks in Eq. (13). Higher scores indicate that the generated code outperforms a larger fraction of the reference solutions, reflecting stronger code efficiency. Implementation Details Afterburner models are trained on a single node with eight H100 GPUs. We utilized Llama-Factory [64] for SFT and DPO training phases, and Verl [48] for GRPO training. Dataset construction details can be found in Section 5. For inference acceleration, we use vLLM [31]. Comprehensive details regarding the training pipeline (as shown in Figure 4) and hyperparameters are provided in the Appendix E. Monolith configuration can be found in Appendx H. 
Figure 4: Afterburner training pipeline: the base model is tuned on the $DS_{SFT}$, $DS_{DPO}$, $DS_{COLD}$, and $DS_{GRPO}$ datasets to produce the corresponding Afterburner variants. # 6 Discussion and Key Takeaways # 6.1 How about the Code Efficiency Performance of Vanilla LLMs? Our baseline evaluation of diverse LLMs on the Venus and APPS benchmarks (Tables 1 and 9) reveals a critical performance limitation: despite achieving high functional correctness (PASS@1), vanilla models generate code with strikingly inferior computational efficiency compared to human solutions [36, 45]. For example, OpenAI o4-mini, a top-performing model with $89.11\%$ PASS@1 on Venus, produces code whose runtime efficiency (BEYOND-T) surpasses only $56.85\%$ of human solutions (and merely $40.07\%$ on APPS), with similar disparities observed for other leading models and across all efficiency metrics. While stronger (bigger) models exhibit marginally better code efficiency, this is insufficient to overcome the fundamental gap. This pervasive efficiency deficit in LLM-generated code clearly motivates the development of dedicated optimization frameworks, such as Afterburner, to enhance code generation in real-world applications. # 6.2 Does the Iterative Improvement Framework Work? The foundational hypothesis of the Afterburner framework is that iterative refinement, driven by execution feedback, can progressively enhance code efficiency. This section investigates the effectiveness of such iterative self-optimization and how the choice of underlying optimization strategy impacts learning dynamics and outcomes across successive iterations. Notably, the prompt placeholder original_code is left empty for the initial code generation (see Section F.3). • SFT Memorized Superficial Patterns. SFT primarily learns to mimic transformations from less to more efficient code based on its training data. In the model training phase, Afterburner$_{SFT}$ updates these learned patterns. 
Initial gains are possible if the input code matches known suboptimal patterns. However, SFT’s capacity to generalize to novel inefficiencies or explore fundamentally different algorithmic solutions is inherently limited, as it lacks a deep understanding of why a pattern is efficient beyond its training data co-occurrence. Consequently, as seen in Figure 5, SFT often quickly exhausts its applicable patterns in iterative optimization. Table 1: Comparison of Vanilla Efficiency Performance between Open-Source and Closed-Source Models on the Venus Benchmark. Parentheses denote $95\%$ CI. The top score for each metric is highlighted in bold. Afterburner uses ‘both time and memory efficient’ instruction in the generation. Figure 5: PASS@1, BEYOND-T, BEYOND-M, and BEYOND-I versus optimization iterations for the base model and the Afterburner SFT, DPO, and GRPO variants. 
• DPO Realized Static Preferences. DPO internalizes preferences for more efficient solutions from ranked pairs. This allows Afterburner$_{DPO}$ to make more nuanced judgments than SFT, guided by characteristics correlated with better performance under the objective $\mathcal{T}$. Iteratively, DPO can steer code towards these preferred traits. However, since DPO is typically an offline method, it does not learn from its own generations without retraining. Thus, its exploration is still bounded by the diversity of its initial preference dataset. Figure 5 shows DPO may offer more consistent improvement than SFT, but also tends to plateau once its learned preferences are fully exploited. • GRPO Cultivated Adaptive Proficiency. GRPO utilizes an online reinforcement learning approach. In the training phase, Afterburner$_{GRPO}$ generates multiple candidates, which are evaluated by Monolith. The resultant empirical feedback directly updates the policy $\pi_{\theta}$ to favor strategies yielding more efficient code for the objective $\mathcal{T}$. This online learning is pivotal for iterative self-improving optimization. Rather than merely memorizing static patterns or preferences, GRPO develops a deeper proficiency in code optimization. 
By actively exploring the solution space and receiving direct feedback, Afterburner$_{GRPO}$ continuously refines its generation strategy, adapts to problem-specific nuances, and uncovers sophisticated optimization policies over iterations. The group-wise ranking further enhances its fine-grained understanding of relative efficiencies. This adaptive capability, evident in Figure 5, allows GRPO to achieve sustained and superior performance improvements, continually pushing its optimization boundaries. Table 2: Performance of Afterburner models at Iteration 4 with execution feedback and original code input removed, respectively. Bracketed values represent the change in performance compared to the baseline: red indicates degradation, and green indicates improvement. Table 3: Model vs. Human on Venus. Bold indicates the top performance per column and model category. $B\%$, $M\%$, $W\%$, and $F\%$ denote percentages of solutions: Better than all human solutions, Within the mediocre human range, Worse than all human solutions, or Failed to pass all test cases, respectively. # 6.3 Why Can GRPO Iteratively Enhance Code Efficiency? Generation diversity is foundational to its iterative capability. By relaxing the KL-divergence restriction in the training phase, Afterburner$_{GRPO}$ inherently explores multiple potential optimization pathways without ground-truth references. This diversity ensures that Afterburner$_{GRPO}$ is not confined to local optima. Moreover, GRPO gains experience in improving code from its own generations through the iterative refinement loop. It does not just generate code, but executes it to gather concrete feedback on its real-world performance, effectively learning from its successes and failures in a continuous cycle. As the model identifies more efficient code structures in training, it becomes progressively better at producing them at inference. 
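The group-wise ranking behind this feedback loop can be sketched as a group-relative advantage computation (an illustrative sketch in the spirit of GRPO, not the authors' training code; the reward values are hypothetical efficiency percentile ranks for one group of sampled candidates):

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages: standardize each candidate's reward against
    the mean and std of its own sampled group, so each candidate is pushed
    up or down relative to its siblings rather than an absolute baseline."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Hypothetical efficiency percentile ranks for one group of 4 candidates.
advs = group_relative_advantages([0.20, 0.55, 0.55, 0.90])
# The most efficient candidate gets the largest positive advantage,
# the least efficient the most negative; group advantages sum to ~0.
assert advs[-1] > 0 > advs[0]
```

Because the baseline is the group's own mean, no learned value network is needed: relative efficiency within a group is enough to produce a training signal.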
Ablation studies (Table 2) confirm that removing the feedback mechanism or the original code context significantly diminishes Afterburner$_{GRPO}$ performance, an effect not always as evident in Afterburner$_{SFT}$ or Afterburner$_{DPO}$. # 6.4 Can Afterburner Generate Code Surpassing Human Efficiency? While LLMs excel at generating functionally correct code, often by imitating human-written examples in their training data, a key question remains: can they produce solutions exceeding the efficiency of the best human-written code? To investigate this, we compare the efficiency of model-generated code against human solutions from Venus. As presented in Table 3, reasoning models such as QwQ-32B and OpenAI o4-mini exhibit a higher ability to occasionally generate superhuman solutions. Crucially, our proposed Afterburner$_{GRPO}$ yields the highest $B\%$ scores across all evaluated metrics after 8 iterations: TIME $(8.00\%)$, MEMORY $(7.00\%)$, and INTEGRAL $(5.33\%)$. This demonstrates that Afterburner$_{GRPO}$ moves beyond merely replicating common patterns observed during pre-training. By actively exploring the solution space through RL, it discovers highly optimized implementations that are often structurally different from canonical human approaches. However, this enhanced exploration entails a trade-off: Afterburner$_{GRPO}$ also generates a larger fraction of solutions that are less efficient than the human baseline.
Large Language Models (LLMs) generate functionally correct solutions but often fall short in code efficiency, a critical bottleneck for real-world deployment. In this paper, we introduce a novel test-time iterative optimization framework to address this, employing a closed-loop system where LLMs iteratively refine code based on empirical performance feedback from an execution sandbox. We explore three training strategies: Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Group Relative Policy Optimization (GRPO). Experiments on our Venus dataset and the APPS benchmark show that SFT and DPO rapidly saturate in efficiency gains. In contrast, GRPO, using reinforcement learning (RL) with execution feedback, continuously optimizes code performance, significantly boosting both pass@1 (from 47% to 62%) and the likelihood of outperforming human submissions in efficiency (from 31% to 45%). Our work demonstrates effective test-time code efficiency improvement and critically reveals the power of RL in teaching LLMs to truly self-improve code efficiency.
[ "cs.SE", "cs.AI" ]
# Introduction The development of large language models tailored to the field of Traditional Chinese Medicine (TCM) [1,2] has emerged as a significant research direction. Given the unique and intricate nature of the TCM knowledge system, the construction of intelligent tools specifically designed for this domain can substantially enhance the efficiency of medical students, clinicians, and researchers. Such models have the potential to facilitate accurate and timely access to specialized information for clinical decision-making, knowledge retrieval, and academic inquiry, thereby supporting effective reasoning and practical application within the TCM framework. TCM diagnostic methods including inspection, auscultation and olfaction, inquiry, and palpation embody a representative process of multimodal information acquisition, integration, and reasoning [3]. Fundamentally, this diagnostic paradigm reflects the nature of multimodal fusion in clinical decision-making. However, existing large language models (LLMs) tailored for TCM still face notable limitations in real-world applications. These limitations are primarily manifested in their relatively small model scales, insufficient reasoning capacity, and the lack of deep integration of multimodal information. The acquisition of high-quality TCM data poses significant challenges, as it requires deep expertise in traditional medicine, sustained clinical data collection, and extensive manual annotation. Currently, most mainstream medical benchmark datasets [4–8] are predominantly focused on Western medicine and have yet to systematically address the core tasks unique to TCM, including syndrome differentiation, symptom-based diagnosis, and formula-herb matching. Furthermore, the training and evaluation of existing TCM large language models remain heavily reliant on unimodal textual data, neglecting other essential modalities that are widely utilized in clinical practice. 
These include diagnostic images (e.g., tongue and pulse), medicinal herb atlases, and structured case records. Such an overdependence on textual data severely constrains the models' ability to capture the holistic and multimodal nature of TCM knowledge, thereby impeding their performance in complex and realistic clinical scenarios. Therefore, the construction of a standardized evaluation dataset for TCM that integrates text, images, audio, and structured data is of great importance. On one hand, such a dataset would enable a comprehensive and accurate assessment of existing LLMs in handling complex multimodal tasks, thereby providing a realistic reflection of their overall performance in clinical applications. On the other hand, a unified and standardized evaluation framework would facilitate fair and objective comparisons across different TCM-specific models, supporting continuous optimization and iterative improvement of model capabilities. To address the aforementioned gaps, we propose TCM-Ladder, which, to the best of our knowledge, is the first large-scale multimodal dataset specifically designed for the training and evaluation of large language models in TCM. TCM-Ladder encompasses a wide spectrum of domain-specific knowledge, including fundamental TCM theories, diagnostics, formulae, pharmacology, clinical medicine, as well as visual modalities such as tongue images, herbal medicine illustrations, acupuncture, and tuina (therapeutic massage), thereby offering a comprehensive foundation for developing and benchmarking TCM-specific LLMs. 
Figure 1. Overview of the architectural composition of TCM-Ladder. TCM-Ladder encompasses six task types aimed at evaluating the comprehensive capabilities of large language models in Traditional Chinese Medicine. These include: (1) single-choice questions, which assess basic knowledge recognition; (2) multiple-choice questions, designed to test the model’s ability to integrate and reason over complex concepts; (3) long-form diagnostic question answering, which evaluates clinical reasoning based on detailed symptom descriptions and patient inquiries; (4) fill-in-the-blank tasks, which measure generative accuracy and contextual understanding without the aid of answer options; (5) image-based comprehension tasks, involving the interpretation of medicinal herb and tongue images to assess multimodal reasoning across visual and textual inputs; and (6) additional audio and video resources, such as diagnostic sounds, pulse recordings, and tuina (massage) videos, which support the development and evaluation of multimodal TCM models incorporating auditory and dynamic visual data. 
As illustrated in Figure 1, based on the TCM-Ladder dataset, we design a series of evaluation tasks to comprehensively assess the capabilities of TCM-specific large language models across multiple dimensions. We constructed a total of 21,326 high-quality questions and 25,163 diagnostic long-text dialogues based on domain-specific literature and publicly available databases across various subfields of TCM. In addition, we release a visual dataset comprising 6,061 images of medicinal herbs, 1,394 tongue images, 6,420 audio clips, and 49 videos, forming a comprehensive multimodal foundation to support diverse evaluation tasks. All textual and visual data were independently reviewed and validated by certified TCM practitioners to ensure accuracy, clinical relevance, and authoritative quality. Subsequently, we benchmarked the performance of 9 state-of-the-art general domain LLMs [9–17] and 5 TCM-specific models [18–20] using the TCM-Ladder dataset. Additionally, we fine-tuned a GPT-4-based model, Bencao [21], and trained a Qwen2.5-7B [22] based reasoning model, which uses a training subset constructed from TCM-Ladder to support TCM-specific reasoning tasks. Our contributions can be summarized as follows: 1. We construct TCM-Ladder, a multimodal dataset designed for both training and evaluating TCM-specific and general domain LLMs. The dataset encompasses multiple TCM sub-disciplines and a variety of data modalities. 2. We design a comprehensive set of tasks including single-choice questions, multiple-choice questions, fill-in-the-blank tasks, visual understanding tasks, and long-form question answering to evaluate models' reasoning abilities across different tasks. 3. We introduce Ladder-Score, an evaluation metric that integrates TCM-specific terminology and LLM-assisted semantic scoring to assess term accuracy and reasoning quality in TCM question answering. 4. We systematically evaluate the performance of several general domain and TCM-specific LLMs on TCM-Ladder. 
To our knowledge, this is the first work to conduct a comparative evaluation of diverse LLMs on a unified multimodal TCM dataset. 5. We develop an interactive data visualization website that not only presents evaluation results, but also allows researchers to explore existing data and contribute new entries, thereby providing a standardized, extensible, and multimodal infrastructure for future benchmarking of TCM-specific LLMs. # 2. Related Works In recent years, the expanding application of LLMs in medicine and the sciences has driven the progressive development of evaluation datasets tailored for TCM, evolving from modern medical domains to TCM-specific tasks, and from classification-based to generation-based paradigms. Huatuo-26M⁴, released in 2020, remains the largest Chinese medical QA dataset, comprising over 26 million question–answer pairs sourced from online encyclopedias, medical knowledge bases, and telemedicine transcripts. Despite its scale, the dataset suffers from noisy labels, informal expressions, redundancy, and a lack of TCM-specific annotations, limiting its utility for TCM applications. CBLUE⁵ introduced a standardized multi-task evaluation suite for Chinese biomedical NLP, covering named entity recognition, relation extraction, etc. PromptCBLUE⁶ extended this framework via instruction tuning and prompt reformulation to facilitate few-shot and zero-shot evaluation. However, both benchmarks were designed around modern medical reasoning and do not reflect the unique logic or semantic structure of TCM diagnosis. To address these gaps, TCMBench⁷ compiled 5,473 structured questions from national TCM licensing examinations, providing a focused benchmark for foundational knowledge assessment. Nevertheless, it lacks multimodal input (e.g., tongue and pulse images) and real-world diagnostic reasoning tasks. 
TCMEval-SDT⁸ introduced syndrome differentiation based on 300 clinical cases, evaluating the model's reasoning over symptom–pathomechanism–syndrome chains. While it improved interpretability, its scale and disease diversity remain limited. Subsequently, TCM-3CEval⁹ proposed a cognitive three-axis framework—basic knowledge, classical text comprehension, and clinical decision-making—enabling fine-grained cognitive evaluation. However, tasks were still text-only and often reduced the complexity of classical TCM literature to overly simplistic answers. TCMD¹⁰ presented a human-annotated open-ended QA benchmark emphasizing reasoning and generation, though annotation costs limited its scale and case diversity. ShenNong_TCM_Dataset¹¹ adopted a novel approach, combining knowledge graphs with ChatGPT-based generation to create 110,000+ instruction–response pairs on herbal medicine and treatment plans. While valuable for instruction tuning, the absence of expert validation raises concerns over factual accuracy and stylistic fidelity. CHBench¹² introduced a safety-focused benchmark with 9,492 community-sourced questions, highlighting deficiencies in LLM reliability under ethically sensitive conditions. However, its scope remains narrow. MedBench¹³ represents the most comprehensive Chinese medical LLM evaluation to date, integrating 20 datasets and over 300,000 questions across diverse tasks, including QA, clinical case analysis, diagnostic reasoning, and summarization. The platform supports dynamic sampling and randomized option ordering to prevent overfitting. However, API access is restricted due to data privacy concerns. Benchmarks like CMB¹⁴ and CMExam¹⁵ further extend to structured exam QA, offering high coverage but lacking realistic patient–physician interaction. Table 1. Comparison of TCM-Ladder with existing question answering datasets. TCM-Ladder distinguishes itself from existing datasets in several key aspects. 
First, it establishes a large-scale, open-ended QA dataset that spans a wide range of TCM subfields, including basic theory, diagnostics, internal medicine, surgery, pediatrics, and pharmacology. This breadth enables more thorough and representative evaluation of TCM-specific LLMs across multiple knowledge domains. Second, TCM-Ladder integrates visual elements such as herbal medicine images and tongue diagnostics. This multimodal design reflects traditional TCM diagnostic practices, requiring LLMs to demonstrate both textual reasoning and visual understanding capabilities. Third, TCM-Ladder incorporates a variety of task formats. This comprehensive task structure facilitates an in-depth evaluation of the strengths and limitations of LLMs, providing guidance for the future development of TCM-specific models. # 3. TCM-Ladder Datasets # 3.1 Data Collection We collected a question-answering dataset covering various domains of TCM, including several publicly available datasets previously published in academic literature under permissive licenses. For the textual data, we identified seven subfields: fundamental theory, diagnostics, herbal formulas, internal medicine, surgery, pharmacognosy, and pediatrics. For the Chinese herbal medicine image data, we collected 6,061 images of medicinal herbs based on the herb names referenced in The Pharmacology of Chinese Herbs [34]. The dataset comprises images sourced from publicly available online resources, as well as photographs we captured at traditional Chinese medicine manufacturing facilities. Sample images and the collection process are provided in Appendix G. The clinical tongue images were collected by a tongue imaging device [14] at Shanghai University of Traditional Chinese Medicine. This device is designed for tongue diagnosis and provides stable and consistent lighting conditions during image acquisition. 
Another subset of the proprietary data was obtained from our previous work, the iTongue [35,36] diagnostic software. All data collection procedures were approved by the institutional ethics review board. To protect the privacy of tongue image contributors, only a subset of tongue image patches and corresponding labels have been released. The video data was recorded by faculty members from the Department of Acupuncture and Tuina at Shanghai University of Traditional Chinese Medicine. These instructional videos cover essential techniques, procedural explanations, and key operational steps. Audio and pulse diagnosis data were sourced from publicly available datasets referenced in academic publications[37–40]. A detailed list is available in the supplementary materials. We manually filtered and removed samples with poor quality or missing information from the collected data. # 3.2 Construction of the Datasets The textual question-answering (QA) data consist of two parts. The first part comprises 5,000 TCM-related QA pairs manually written by licensed TCM practitioners under a standardized question design protocol (see Appendix I). To ensure answer accuracy, each question was independently reviewed and verified by two additional TCM physicians. The second part of the textual QA data was collected from publicly available sources, including the National Physician Qualification Examination of China and various open-access online resources. Detailed data sources and construction guidelines are provided in Appendix B. The visual question-answering (VQA) tasks were constructed through both manual annotation and automated generation based on existing knowledge bases. For the manually created subset, domain experts selected high-quality images from the Chinese herbal medicine image repository and generated corresponding questions based on each herb's name and medicinal properties. The automatically generated subset was produced through a procedural pipeline. 
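Such a procedural pipeline can be sketched as follows (a minimal illustration with hypothetical names such as `herb_bank` and `make_vqa_item`; the template library and knowledge base are simplified to plain Python structures):

```python
import random

# Hypothetical template library for auto-generated VQA items.
TEMPLATES = ["Which of the following images shows {herb}?"]

def make_vqa_item(correct_herb, herb_bank, n_distractors=3, rng=random):
    """Build one auto-generated VQA item: the image labeled with the
    target herb is the answer, and distractor images are randomly
    sampled from other herbs in the knowledge base."""
    others = [h for h in herb_bank if h != correct_herb]
    distractors = rng.sample(others, n_distractors)
    options = [herb_bank[correct_herb]] + [herb_bank[h] for h in distractors]
    rng.shuffle(options)
    return {
        "question": rng.choice(TEMPLATES).format(herb=correct_herb),
        "options": options,
        "answer": options.index(herb_bank[correct_herb]),
    }

# Hypothetical herb-name -> image-path knowledge base.
bank = {"Huangqi": "img/huangqi.jpg", "Renshen": "img/renshen.jpg",
        "Gancao": "img/gancao.jpg", "Danggui": "img/danggui.jpg",
        "Bohe": "img/bohe.jpg"}
item = make_vqa_item("Huangqi", bank)
assert item["options"][item["answer"]] == "img/huangqi.jpg"
```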
For example, an image labeled as Astragalus membranaceus (Huangqi) was selected as the correct answer, while three distractor images were randomly sampled from the knowledge base. A question was then constructed using a predefined template library, such as “Which of the following images shows Huangqi?” The design of tongue image understanding tasks followed a similar approach. Details of the construction process and implementation code can be found in Appendix G. # 3.3 Deduplication and Preprocessing Detecting duplication and semantic similarity in the data is critical for both model evaluation and training, as it helps prevent evaluation failures and reduces the risk of overfitting caused by redundant content. Given the diverse sources of the original data, we conducted a comprehensive similarity detection process on the aggregated dataset and removed highly similar questions to enhance overall data quality. The methods employed included string edit distance [41], TF-IDF [42,43] with cosine similarity, and BERT-based [44,45] semantic encoding. Subsequently, all questions and answers were manually reviewed by two licensed physicians. The selection criteria and detailed experimental procedures are provided in Appendix I. Subsequently, we divided the dataset into three subsets: $10 \%$ for evaluation, $10 \%$ for validation, and $80 \%$ for training. To ensure balanced representation, each subset contains question-answer pairs spanning all subfields. # 3.4 Datasets Statistics Table 2 presents the statistics of all constructed question-answer pairs across different categories. The TCM-Ladder dataset comprises a total of 52,169 TCM-related QA instances, including 4,261 Chinese herbal medicine images and 512 annotated tongue image patches. The distribution of each data type is illustrated in Figure 2. Table 2. Statistics of the collected questions. Figure 2. Data distribution and length statistics in TCM-Ladder. 
The left illustrates the dataset composition across text, image, and audio modalities, along with TCM subfields. The right plots show the distribution of question and answer lengths. # 4. Ladder-Score Evaluating free-form question answering presents notable challenges, as the responses are often descriptive and lack a predefined standard format. This issue is further exacerbated in the context of TCM diagnostic tasks, where large language models are capable of generating diverse and nuanced answers. Even when the expressions differ, the underlying responses may still be factually correct. Traditional evaluation metrics such as BLEU [46] and ROUGE [47] often fail to capture this semantic equivalence adequately. Recently proposed methods [48–50] employ instruction-tuned models to score candidate answers on a rubric-based scale. We propose a novel evaluation metric for TCM question answering, named Ladder-Score. This score comprises two components: TermScore, which assesses the accuracy and completeness of TCM terminology usage, and SemanticScore, derived from large language models to evaluate multiple aspects including logical consistency, semantic accuracy, comprehensiveness of knowledge, and fluency of expression. As shown in Equation (1), the Ladder-Score is a weighted combination of these two components: $$ \mathrm{Ladder\text{-}Score} = \alpha \cdot \mathrm{TermScore} + \beta \cdot \mathrm{SemanticScore} \tag{1} $$ where $\alpha = 0.4$ and $\beta = 0.6$, which can be adjusted based on practical needs. The scoring criteria, terminology dictionary, and calculation examples can be found in Appendix H. # 5. Experiments # 5.1 Experiment Setup We evaluated 9 state-of-the-art general domain LLMs and 5 TCM-specific models on the TCM-Ladder dataset across five task settings: single-choice questions, multiple-choice questions, fill-in-the-blank questions, image-based understanding, and long-form dialogue tasks. Evaluations were conducted under zero-shot settings, and models received only the task instructions as input. For single-choice and image understanding tasks, we used the Top-1 prediction accuracy [51] as the primary evaluation metric. For multiple-choice tasks, we adopted exact match accuracy to assess performance comprehensively. For fill-in-the-blank and long-form dialogue tasks, we evaluated models using metrics such as accuracy, BLEU, ROUGE, METEOR, and BERTScore. # 5.2 Model Training We trained two models using the TCM-Ladder dataset. The first is Bencao [21], an online model fine-tuned from ChatGPT, and the second is Ladder-base, which is built upon the pretrained Qwen2.5-7B-Instruct [52] model and enhanced with Group Relative Policy Optimization (GRPO) [53] to improve its reasoning capabilities. The Bencao model was trained on knowledge extracted from over 700 classical Chinese medicine books, none of which contained any question-answer pairs. Additionally, the training subset of TCM-Ladder was used as its knowledge base. The GRPO stage for Ladder-base was conducted on two NVIDIA A100 PCIe GPUs (80GB each). The temperature and top-p sampling parameters of Ladder-base were 0.7 and 0.8, respectively. Training was performed for 2 epochs with a group size of 6 and a batch size of 12, resulting in a total training time of approximately 60 hours. Model training and inference were implemented using HuggingFace Transformers, while the GRPO process was carried out using the TRL (Transformer Reinforcement Learning) library [54]. Details of the training process can be found in Appendix C. # 5.3 Human Evaluation We conducted a human evaluation on $20\%$ of the TCM-Ladder test set. 
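As a concrete illustration, the Ladder-Score combination from Equation (1) and the exact-match accuracy used for the multiple-choice tasks in Section 5.1 can be sketched as follows (an illustrative sketch; the TermScore and SemanticScore values are assumed to be precomputed on a 0–1 scale):

```python
def ladder_score(term_score, semantic_score, alpha=0.4, beta=0.6):
    """Weighted combination of terminology accuracy (TermScore) and
    LLM-based semantic quality (SemanticScore), as in Equation (1)."""
    return alpha * term_score + beta * semantic_score

def exact_match_accuracy(preds, golds):
    """Multiple-choice metric: a prediction counts only when the
    predicted answer set matches the gold set exactly."""
    return sum(set(p) == set(g) for p, g in zip(preds, golds)) / len(golds)

# A response with strong terminology (0.9) and good semantics (0.8).
assert abs(ladder_score(0.9, 0.8) - 0.84) < 1e-9
# One of two multiple-choice items matched exactly (order-insensitive).
assert exact_match_accuracy([["A", "B"], ["C"]], [["B", "A"], ["D"]]) == 0.5
```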
Due to the coverage of multiple subfields, establishing a reliable human upper bound poses a significant challenge, as accurately answering questions across all domains requires extensive interdisciplinary expertise. To investigate this issue, we recruited two licensed clinical TCM physicians, who were not involved in the original data annotation. Human evaluators were asked to select the correct answers based on the question stems and to identify the correct herbal medicine and tongue images. In terms of top-1 accuracy for answer retrieval, the human evaluators achieved a performance of $64 \%$ , which was approximately $4 \%$ lower than that of the best-performing model (Bencao). This suggests that LLMs may already possess strong comprehension capabilities in the domains of herbal medicine and tongue image recognition. # 5.4 Main Results # 5.4.1 Text-Based Single and Multiple-Choice Question Answering As shown in Figure 3, Ladder-base (ours) consistently outperforms other models across all subject areas, achieving the highest overall accuracy. Notably, its performance is especially strong in Pharmacognosy, Herbal Formulas, and Pediatrics, where exact match scores exceed 0.85. Our other model, Bencao (ours), also demonstrates robust performance, particularly in Diagnostics and Internal Medicine. Among the general domain LLMs, Gemini 2.5 Pro, Deepseek, and Tongyi Qwen show relatively stable accuracy across domains, with scores ranging from 0.65 to 0.75, though they still fall short compared to domain-specialized models. In contrast, Claude 3, GPT-4o mini, and Bentsao underperform, especially in the more clinically nuanced domains such as Surgery and Pediatrics, suggesting limited capability in handling complex, multi-faceted TCM tasks. These findings highlight the advantage of domain-specific fine-tuning and multi-source integration, as utilized in Ladder-base, for enhancing the accuracy and generalization of LLMs on structured TCM knowledge assessments. Figure 3. 
Performance of general-domain and TCM-specific language models on single and multiple-choice question answering tasks # 5.4.2 Visual Question Answering To further assess the models' capability in visual understanding tasks within Traditional Chinese Medicine (TCM), we evaluated 10 large language models (LLMs) on two image-based benchmarks: herb classification and tongue image diagnosis. As illustrated in Figure 4, performance varies considerably across models. Among the evaluated models, Bencao (ours) achieves the highest accuracy in both tasks, with over 80% on herb recognition and above 65% on tongue classification, demonstrating strong multimodal understanding grounded in TCM-specific training. General-domain LLMs such as Gemini 2.5 Pro, Gemini 2.0 Flash, and Tongyi Qwen exhibit moderate performance, with herb classification accuracy around 65–75%, but show a relative drop on tongue image tasks (around 50%), likely due to the greater complexity and domain specificity of tongue diagnosis. In contrast, models like GPT-4o, Claude 3, Kimi, and Grok3 demonstrate limited performance, particularly on the tongue classification task, where accuracies often fall below 40%, revealing insufficient visual comprehension of TCM-related imagery. Note that models such as Ladder-base and Zhongjing are not included in this figure, as they are not equipped with image understanding capabilities at this stage; their current design focuses on structured text-based TCM evaluation and does not support visual input. Figure 4. The performance of large language models on questions regarding Chinese herbal medicine and tongue images.
# 5.4.3 Diagnostic Dialogue and Fill-in-the-Blank Questions As shown in Table 3, in the diagnostic dialogue task, our model Ladder-base achieved the highest scores in BLEU-4 (0.0249) and ROUGE-L (0.2431), while also maintaining a strong Ladder-score (0.803). This indicates that Ladder-base generates answers with high lexical similarity, semantic accuracy, and alignment with TCM diagnostic logic. Notably, Tongyi Qwen achieved the best Ladder-score (0.861) and the highest METEOR (0.2328), showcasing its strength in generating fluently worded responses. Bencao (ours) achieved the best BERTScore (0.9663), reflecting its semantic closeness to gold references. In the fill-in-the-blank task, Bencao significantly outperformed all other models, achieving the highest exact-match accuracy of 0.9034, followed by Tongyi Qwen (0.8786) and Deepseek (0.874). Our Ladder-base model also performed competitively with 0.8623 accuracy, further demonstrating its generalizability beyond free-form dialogue. Overall, the results demonstrate that Ladder-base excels in structured diagnostic dialogue tasks, generating semantically accurate and logically coherent responses, while Bencao shows outstanding performance in fill-in-the-blank tasks, reflecting strong factual recall and precise terminology usage. Domain-specific models consistently outperform general-domain LLMs, particularly in tasks that require accurate retrieval of structured TCM knowledge and professional terms. Table 3. Performance of Different Models on Diagnostic Dialogue and Fill-in-the-Blank Questions # 6 Application Website In addition to releasing the raw dataset, we provide access to all TCM-Ladder data and leaderboard results through an interactive website (https://tcmladder.com/). This platform enables researchers to explore, verify, and contribute to the open-access data.
We encourage the research community to submit additional data through the platform, and we intend to expand the dataset continuously as part of our ongoing efforts. Our objective is to establish a long-term and reliable data foundation for the training and evaluation of TCM-specific LLMs. # 7 Limitations and Societal Impact Although TCM-Ladder encompasses question-answer pairs from multiple disciplines within TCM, its current scale remains insufficient to cover the full breadth of TCM knowledge. TCM diagnosis is inherently a multimodal process; textual information represents only one component. At present, the utilization of data related to tongue diagnosis, pulse diagnosis, and olfactory inspection remains limited, and these modalities require further supplementation and enrichment. Expanding and continuously updating the scope and scale of the data included in TCM-Ladder will be a critical direction for future research.
Traditional Chinese Medicine (TCM), as an effective alternative medicine, has been receiving increasing attention. In recent years, the rapid development of large language models (LLMs) tailored for TCM has underscored the need for an objective and comprehensive evaluation framework to assess their performance on real-world tasks. However, existing evaluation datasets are limited in scope and primarily text-based, lacking a unified and standardized multimodal question-answering (QA) benchmark. To address this issue, we introduce TCM-Ladder, the first multimodal QA dataset specifically designed for evaluating large TCM language models. The dataset spans multiple core disciplines of TCM, including fundamental theory, diagnostics, herbal formulas, internal medicine, surgery, pharmacognosy, and pediatrics. In addition to textual content, TCM-Ladder incorporates various modalities such as images and videos. The datasets were constructed using a combination of automated and manual filtering processes and comprise 52,000+ questions in total. These questions include single-choice, multiple-choice, fill-in-the-blank, diagnostic dialogue, and visual comprehension tasks. We trained a reasoning model on TCM-Ladder and conducted comparative experiments against 9 state-of-the-art general domain and 5 leading TCM-specific LLMs to evaluate their performance on the datasets. Moreover, we propose Ladder-Score, an evaluation method specifically designed for TCM question answering that effectively assesses answer quality regarding terminology usage and semantic expression. To our knowledge, this is the first work to evaluate mainstream general domain and TCM-specific LLMs on a unified multimodal benchmark. The datasets and leaderboard are publicly available at https://tcmladder.com or https://54.211.107.106 and will be continuously updated.
# 1 Introduction Code-switching (CSW), the act of alternating between two or more languages within a single discourse (Das et al., 2023; Zhang et al., 2023; Ochieng et al., 2024), is a common phenomenon in multilingual communities (Bullock and Toribio, 2009; Parekh et al., 2020; Doğruöz et al., 2021), and increasingly prevalent in online content (Kodali et al., 2024), where users naturally mix languages in everyday informal communication. Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of natural language processing tasks (Zhao et al., 2023). As they are increasingly used to process and generate content, the widespread availability of code-switched inputs makes it crucial to understand how LLMs reason about such mixed-language data, and whether their multilingual fluency reflects genuine understanding or superficial pattern matching (Zhang et al., 2023). To systematically assess LLMs' handling of such data, we turn to insights from linguistic theories that define the structural constraints governing natural CSW. Linguistic theories have long studied the structure of CSW text, proposing formal constraints on permissible switch points, such as the Equivalence Constraint Theory (ECT), which posits that switches occur at positions where the surface structures of both languages are grammatically compatible (Poplack, 1978), and the Matrix Language Frame model (MLF), which distinguishes between a Matrix Language (ML) that provides the grammatical frame of the clause and an Embedded Language (EL) that contributes inserted content without disrupting this structure (Myers-Scotton, 1993). These frameworks aim to identify the grammatical boundaries and syntactic compatibility that make CSW possible and natural.
While such theories offer testable hypotheses for analyzing CSW, current efforts in synthetic CSW generation often prioritize producing fluent mixed-language text over probing whether LLMs genuinely internalize and apply these structural constraints in their reasoning (Pratapa et al., 2018; Potter and Yuan, 2024; Kuwanto et al., 2024; Heredia et al., 2025). Despite the availability of well-established linguistic theories, existing evaluation benchmarks fall short of leveraging these insights to assess deeper comprehension in code-switched contexts. Current benchmarks for evaluating the CSW capabilities of language models primarily focus on surface-level tasks (Khanuja et al., 2020; Aguilar et al., 2020; Patwa et al., 2020). However, they largely overlook the challenge of evaluating deeper reasoning and semantic understanding in mixed-language settings (Yadav et al., 2024; Gupta et al., 2024; Ng and Chan, 2024), leaving a critical gap in assessing the true extent of LLMs' code-switched comprehension abilities. To address these gaps, we introduce a systematic evaluation framework that leverages a constrained, multi-step LLM pipeline to generate linguistically grounded code-switched variants of established benchmarks in reading comprehension, multi-domain knowledge, and natural language inference. Code and data are publicly available. Our experiments reveal that code-switching has a nuanced impact on LLM comprehension, influenced by the languages involved and the switching style, as illustrated by the example in Figure 1.
Figure 1: A question ("Hume says that beauty is …", with choices (A) a quality in things themselves, (B) a matter of a priori knowledge, (C) judged by logical standards, (D) no quality in things themselves) shown in English and in code-switched variants where "beauty" is replaced by Arabic (لامجلا), Chinese (美), French (la beauté), or German (Schönheit), yielding different model answers.
In particular:
- Embedding non-English tokens into an English matrix language consistently degrades performance, even when the switches follow linguistic constraints, suggesting a structural vulnerability that cannot be explained solely by token-level unfamiliarity.
- Embedding English tokens into non-English matrix languages often improves comprehension, especially for models with limited proficiency in the matrix language, indicating a facilitative role for English in such contexts.
- While strategic prompting can help some models, it negatively affects others, highlighting inconsistency in controllability; by contrast, fine-tuning on code-switched data leads to more stable, albeit partial, performance recovery.
Our work advances the ongoing debate over how LLMs process the mixed-language content that now permeates social media, messaging apps, and other corners of the web. We show that models falter when non-English tokens disrupt an English sentence, yet paradoxically grow more confident when English words are embedded in other languages. This asymmetric behavior reveals a structural imbalance and raises broader concerns about linguistic equity as LLM-generated text is recycled, re-posted, and ultimately re-learned by future models. # 2 Related Work Code-Switching in Language Models. Early multilingual encoder-based models (e.g., mBERT (Devlin et al., 2019), XLM-R (Conneau et al., 2020)), while effective on monolingual tasks, consistently faltered on code-switched inputs (Winata et al., 2021a). This gap spurred specialized methods for mixed-language text, including new architectures and training regimes (Winata et al., 2019; Liu et al., 2020; Winata et al., 2021b). Although existing benchmarks (Khanuja et al., 2020) supported these efforts, research predominantly focused on encoder-centric models (Winata et al., 2019; Tan and Joty, 2021; Zhu et al., 2023).
Consequently, decoder-only architectures, now central to state-of-the-art NLP, have received markedly less scrutiny regarding CSW. While some studies probed adversarial code-mixing in autoregressive models (Das et al., 2022), meaningful evaluation of such models requires access to high-quality, linguistically coherent code-switched text. This has motivated growing interest in controlled CSW text generation. Code-Switched Text Generation. Synthetic code-switched text generation plays a critical role in data augmentation and diversification for multilingual language models (Pratapa et al., 2018; Zhang et al., 2023). Methods range from linguistically motivated approaches, such as the Equivalence Constraint Theory (ECT) (Poplack, 1978) and the Matrix Language Frame (MLF) model (Myers-Scotton, 1993), to heuristic token-level substitutions (Myslín, 2014; and, 2018; Chan et al., 2024). Recent work often relies on word-level aligners to guide borrowing from embedded-language texts while preserving grammatical structure (Kuwanto et al., 2024). Although these techniques aim for token-level accuracy, they overlook the growing capacity of LLMs to perform context-aware, linguistically grounded substitutions. Leveraging this potential, recent studies have explored LLM-based generation using linguistic constraints (Kuwanto et al., 2024), fine-tuning on CSW data (Heredia et al., 2025), or zero-shot prompting (Potter and Yuan, 2024). Still, challenges remain in controlling switch placement, scaling across language pairs, and conducting robust evaluation. Our work addresses these challenges by leveraging modern LLMs to generate code-switched text grounded in established theoretical constraints, supporting more rigorous evaluation of model comprehension in mixed-language contexts. Evaluation of LLM CSW Capabilities.
LLM CSW evaluation has largely focused on surface-level tasks through benchmarks like GLUECoS (Khanuja et al., 2020), LinCE (Aguilar et al., 2020), and SemEval (Patwa et al., 2020) (e.g., language ID, sentiment, PoS tagging), thus neglecting deeper semantic or reasoning capabilities. Although more recent studies assess CSW sentiment classification (Winata et al., 2021a) and question answering (Huzaifah et al., 2024), they are limited in scope, emphasizing task-specific metrics over broader comprehension. In contrast, our approach introduces linguistically grounded CSW variants of established comprehension and reasoning tasks, enabling a more rigorous assessment of LLMs' capacity to reason over mixed-language input beyond surface-level performance. # 3 Methodology # 3.1 Notations Let $\mathcal{B} = \{B_p\}_{p=1}^{P}$ be a set of $P$ standard benchmarks, $\mathcal{L} = \{l_j\}_{j=1}^{L}$ a set of $L$ languages from which the matrix and embedded languages are selected for code-switched benchmark generation, and $\mathcal{M} = \{m_k\}_{k=1}^{K}$ a set of $K$ LLMs. To evaluate the performance of an LLM $m_k \in \mathcal{M}$ on code-switched text comprehension, we generate a code-switched version of benchmark $B_p \in \mathcal{B}$ using a single matrix language $l_{\mathrm{matrix}} \in \mathcal{L}$ and a set of embedded languages $\mathcal{L}_{\mathrm{embedded}}$, where $\mathcal{L}_{\mathrm{embedded}} \subseteq \mathcal{L} \setminus \{l_{\mathrm{matrix}}\}$ and $|\mathcal{L}_{\mathrm{embedded}}| \geq 1$, which we denote by $B_p^{l_{\mathrm{matrix}} \to \mathcal{L}_{\mathrm{embedded}}}$.
# 3.2 CSW Methods To investigate how different CSW strategies affect LLM comprehension, we generate inputs using two distinct approaches: a linguistically grounded noun-token method (Poplack, 1988; Muysken, 2000; Moyer, 2002; Chan et al., 2024) and a heuristic ratio-token method (Chan et al., 2024). In the noun-token method, we replace nouns in the matrix-language text with their aligned counterparts from a parallel sentence in the embedded language. Substitutions are only applied when they preserve grammatical well-formedness according to the Equivalence Constraint Theory and the Matrix Language Frame model, which mandates that the matrix language maintain control over the clause's morpho-syntactic structure. In contrast, the ratio-token method replaces a ratio of tokens at random, regardless of linguistic structure. This comparison allows us to isolate the role of syntactic and grammatical constraints in LLM comprehension of code-switched text. # 3.3 Code-Switched Text Generation Approaches Given a parallel corpus, we create code-switched sentences by swapping embedded-language words into matrix-language sentences. To this end, we evaluated two distinct methods for code-switched text generation: an alignment-based method and an LLM-centric method. Alignment-based method. We first align the matrix- and embedded-language sentences with the AWESOME aligner (Dou and Neubig, 2021) enhanced by LaBSE embeddings (Feng et al., 2022). Two variants guide how words are substituted. In the noun-token variant, we use the Stanza POS tagger (Qi et al., 2020) to locate matrix-language nouns and replace each with its aligned counterpart from the embedded-language sentence, prompting Claude 3.5 Sonnet (hereafter Claude) to perform the replacements and ensure that each switch respects the Equivalence Constraint Theory and the Matrix Language Frame model.
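The noun-token substitution at the core of the alignment-based method can be sketched as follows. This is a minimal illustration with hypothetical inputs: it assumes the POS tags and word alignment are already computed (by Stanza and AWESOME in the actual pipeline), and it omits the Equivalence Constraint and Matrix Language Frame checks that the real method enforces:

```python
def noun_token_csw(matrix_tokens, pos_tags, alignment, embedded_tokens):
    """Replace matrix-language nouns with their aligned embedded-language
    counterparts. `alignment` maps matrix token index -> embedded token
    index; tokens that are not nouns, or have no aligned counterpart,
    are kept unchanged. (Illustrative helper, not the paper's code.)"""
    out = []
    for i, (token, tag) in enumerate(zip(matrix_tokens, pos_tags)):
        if tag == "NOUN" and i in alignment:
            out.append(embedded_tokens[alignment[i]])
        else:
            out.append(token)
    return out

# English matrix / German embedded: "the cat drinks milk" / "die Katze trinkt Milch"
matrix = ["the", "cat", "drinks", "milk"]
tags = ["DET", "NOUN", "VERB", "NOUN"]
embedded = ["die", "Katze", "trinkt", "Milch"]
align = {1: 1, 3: 3}  # only the nouns are aligned here
print(noun_token_csw(matrix, tags, align, embedded))
# ['the', 'Katze', 'drinks', 'Milch']
```

The ratio-token method differs only in how the substitution positions are chosen: a random subset of aligned indices rather than the noun positions.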
In the ratio-token variant, $\approx 20\%$ of aligned tokens are chosen at random and replaced, intentionally relaxing all linguistic constraints to match the setup of Chan et al. (2024). LLM-centric method. Inspired by recent work showing that large language models can fluidly generate code-switched text (Potter and Yuan, 2024), we let Claude perform a two-step procedure. First, Claude rewrites the matrix-language sentence while inserting masked placeholders at candidate switch points: nouns for the noun-token variant and randomly selected tokens for the ratio-token variant. Second, in a subsequent and independent step, Claude fills each placeholder with a context-appropriate word taken from the embedded-language sentence, yielding the final code-switched output. # 3.4 Code-Switching Approach Evaluation For each embedded language, we assembled a 300-sample test set and generated code-switched variants using both approaches from Section 3.3. GPT-4o then conducted blind, pairwise comparisons under the LLM-as-a-Judge framework (Zheng et al., 2023), evaluating fluency, depth of mixing, grammatical validity at switch points, and overall coherence. In every case, GPT-4o preferred the two-step LLM-centric approach, demonstrating its superior capacity to produce high-quality, linguistically coherent code-switched text (see Appendix B for details on the embedding model, LLM setup, and CSW approach selection and evaluation). # 3.5 Evaluation Metrics We evaluate models using three key metrics to capture baseline performance and the effects of code-switching: accuracy, weighted average accuracy, and accuracy delta. Accuracy.
For a model $m_k \in \mathcal{M}$ and benchmark $B'$, whether a monolingual test $B_p \in \mathcal{B}$ or its code-switched variant $B_p^{l_{\mathrm{matrix}} \to \mathcal{L}_{\mathrm{embedded}}}$, we define accuracy as:
$$ \mathrm{Acc}(m_k, B') = \frac{1}{|B'|} \sum_{i=1}^{|B'|} \mathbb{1}\big(\mathrm{Correct}(m_k, \mathrm{instance}_i)\big), $$
where $|B'|$ denotes the number of samples in benchmark $B'$, $\mathrm{instance}_i$ is its $i$-th example, and $\mathbb{1}(\cdot)$ is the indicator function. Weighted Average Accuracy. To report an aggregate performance measure for a model $m_k$ across multiple benchmarks $\mathcal{B}$, we compute the weighted average accuracy as:
$$ \mathrm{Acc}_{\mathrm{weighted}}(m_k, l_{\mathrm{matrix}}, \mathcal{L}_{\mathrm{embedded}}) = \frac{\sum_{B_p \in \mathcal{B}} |B_p| \cdot \mathrm{Acc}(m_k, B_p^{l_{\mathrm{matrix}} \to \mathcal{L}_{\mathrm{embedded}}})}{\sum_{B_p \in \mathcal{B}} |B_p|}. $$
Accuracy Delta (ΔAcc). We quantify the code-switching impact by computing the accuracy delta, i.e., the difference between a model's score on the code-switched benchmark and its score on the original monolingual benchmark, as:
$$ \Delta\mathrm{Acc}(m_k, B_p^{l_{\mathrm{matrix}} \to \mathcal{L}_{\mathrm{embedded}}}) = \mathrm{Acc}(m_k, B_p^{l_{\mathrm{matrix}} \to \mathcal{L}_{\mathrm{embedded}}}) - \mathrm{Acc}(m_k, B_p). $$
A positive $\Delta\mathrm{Acc}$ indicates an improvement under code-switching, a negative value a drop. # 4 Experimental Setting Language selection. We consider the set of languages $\mathcal{L} = \{\mathrm{English}, \mathrm{Arabic}, \mathrm{German}, \mathrm{French}, \mathrm{Chinese}\}$. We hypothesize that this set creates varying degrees of semantic, lexical, and syntactic similarity between the matrix language and the embedded languages, which may differentially affect the degradation caused by CSW, akin to effects observed in machine translation (Guerin et al., 2024; Mohamed et al., 2025). Model selection. We evaluated LLaMA 3.2 Instruct (3B) and LLaMA 3.1 Instruct (8B, 70B) (Grattafiori et al., 2024), Qwen 2.5 Instruct (3B, 7B, 72B) (Yang et al., 2025), Mistral 7B Instruct (v0.3) (Albert et al., 2023), and ALLaM 7B (Bari et al., 2024), encompassing a wide range of scales and pretraining curricula. ALLaM currently represents the state of the art in Arabic LLMs, while Qwen and Mistral excel in Chinese and French, respectively, even as they maintain strong multilingual capabilities. The LLaMA family delivers consistently robust multilingual performance, enabling us to isolate the effects of architecture and model scale on CSW resilience. Benchmark selection. We assess LLM comprehension on three established tasks: Belebele (Bandarkar et al., 2023) for passage-level reading comprehension (with both passages and questions code-switched), MMLU (Hendrycks et al., 2020) for broad-domain multiple-choice reasoning (code-switching applied to questions), and XNLI (Conneau et al., 2018) for natural language inference (both premise and hypothesis code-switched). To ensure consistent, scalable evaluation across models, we used and adapted EleutherAI's Language Model Evaluation Harness (Gao et al., 2024) for our code-switched variants.
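The three metrics defined in Section 3.5 reduce to a few lines of code once per-instance correctness is known. A minimal sketch (function names and the example figures are ours, chosen only for illustration):

```python
def accuracy(correct_flags):
    """Acc(m_k, B'): fraction of correctly answered instances."""
    return sum(correct_flags) / len(correct_flags)

def weighted_average_accuracy(per_benchmark):
    """Acc_weighted: size-weighted mean over (num_samples, accuracy)
    pairs, one pair per benchmark B_p."""
    total = sum(n for n, _ in per_benchmark)
    return sum(n * acc for n, acc in per_benchmark) / total

def accuracy_delta(csw_accuracy, monolingual_accuracy):
    """ΔAcc: positive means code-switching helped, negative means it hurt."""
    return csw_accuracy - monolingual_accuracy

# Illustrative (made-up) benchmark sizes and per-benchmark accuracies
print(weighted_average_accuracy([(900, 0.70), (14000, 0.66), (5000, 0.68)]))
```

Because the weights are benchmark sizes, large benchmarks such as MMLU dominate the aggregate score, which is the intended behavior of the paper's weighted metric.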
Table 1: Weighted average accuracy of selected LLMs on noun-token code-switched benchmarks (EN→AR, EN→DE, EN→FR, EN→ZH) compared to the monolingual English baseline. Cell colors indicate relative performance from highest (green) to lowest (red). The highest scores are indicated in bold. # 5 Experiments # 5.1 Experiment 1: Linguistically motivated CSW Setup. We use English as the matrix language $l_{\mathrm{matrix}}$ and perform CSW on the benchmarks with each language in $\mathcal{L} \setminus l_{\mathrm{matrix}}$ as the embedded language separately, using the noun-token CSW method, and compare performance on the code-switched benchmarks with the original English benchmarks. Hypothesis 1 (H1). We hypothesize that LLM performance on code-switched benchmarks degrades in proportion to the linguistic distance between the matrix and embedded languages. Results. Table 1 and Figure 2 show consistent drops in LLM performance on noun-token code-switched benchmarks compared to their English versions. The extent of degradation varied by embedded language and model. For example, LLaMA-70B's weighted average accuracy declined from 0.70 (English) to 0.66 on EN→AR and EN→DE ($\Delta \approx -0.04$) and 0.67 on EN→ZH ($\Delta \approx -0.03$). Mistral-7B showed minimal loss on EN→FR ($\Delta \approx -0.01$), and ALLaM-7B retained relatively strong performance on EN→AR ($\Delta \approx -0.06$). Qwen models exhibited consistent degradation across languages (e.g., Qwen-7B: $\Delta \approx -0.03$ to $-0.06$), with larger models achieving better absolute scores but similar relative drops.
These trends held across all three tasks, underscoring both the general difficulty of CSW and the role of language-specific model strengths. # 5.2 Experiment 2: Non-linguistically motivated CSW Setup. In this experiment, we retain the experimental framework of Experiment 1, replacing the linguistically motivated noun-token CSW method with the ratio-token method. Hypothesis 2 (H2). We hypothesize that non-linguistically motivated CSW leads to sharper performance degradation in LLMs than that observed for linguistically motivated CSW, as such input is less likely to align with patterns encountered during pre-training. Results. Results are shown in Table 2. All models exhibited a decline in weighted average accuracy, consistent with the patterns observed in Experiment 1. The extent of degradation varied with model size and language pairing. Smaller models experienced the most pronounced drops; for example, Llama 3B decreased from 0.54 (EN) to 0.43 on EN→DE ($\Delta = -0.11$) and to 0.47 on EN→AR ($\Delta = -0.07$). In contrast, Llama 70B showed minimal degradation, with weighted average accuracy decreasing from 0.70 to 0.68 across all embedded languages ($\Delta \approx -0.02$). Language-specific resilience was also observed: ALLaM 7B and Mistral 7B retained relatively strong performance on EN→AR and EN→FR, respectively. Qwen 7B exhibited consistent, moderate degradation, decreasing from 0.61 to a range of 0.53–0.57 depending on the embedded language ($\Delta = -0.08$ to $-0.04$). # 6 Ablations Building on Section 5, which found comparable degradation from noun-token and ratio-token CSW, we proceed with ablation studies using exclusively the noun-token method.
Figure 2: Comparison of LLM accuracy on monolingual English versions of the Belebele, MMLU, and XNLI benchmarks (baseline) versus their noun-token code-switched counterparts. English serves as the matrix language, with Arabic (EN→AR), French (EN→FR), German (EN→DE), and Chinese (EN→ZH) as embedded languages. Table 2: Weighted average accuracy of selected LLMs on ratio-token code-switched benchmarks (EN→AR, EN→DE, EN→FR, EN→ZH) compared to the monolingual English baseline. Cell colors indicate relative performance from highest (green) to lowest (red). The highest scores are indicated in bold. # 6.1 English as an embedded language To assess whether embedding English improves comprehension in other matrix languages, we reversed the language roles from the main experiments, using each language in $\mathcal{L} \setminus l_{\mathrm{matrix}}$ as the matrix language and English as the sole embedded language. We generated code-switched versions ($B_p^{l_{\mathrm{matrix}} \to \{\mathrm{English}\}}$) of the Belebele, MMLU, and XNLI benchmarks. By comparing model performance on these variants against their original monolingual counterparts, we aimed to assess any comprehension enhancement attributable to the embedded English words. Results are presented in Table 3. Embedding English into lower-resource matrix languages often improved model performance or, at minimum, avoided large degradations. Gains were especially prominent when models lacked proficiency in the matrix language. For instance, Mistral 7B's weighted average accuracy in Arabic rose from 0.35 to 0.48 ($\Delta = +0.13$), while its score in Chinese increased by $+0.07$ points.
In contrast, when models already demonstrated strong matrix-language proficiency, improvements were minimal or absent. ALLaM 7B (Arabic) and Mistral 7B (French) saw gains of only $+0.01$ and $+0.03$, respectively. High-performing models such as Llama 70B and Qwen 72B showed no change in several settings. Only one case showed a minor drop: Qwen 7B on Chinese ($\Delta \approx -0.01$). This suggests that embedded English may introduce interference when matrix-language representations are already strong. Table 3: Weighted average accuracy of LLMs on monolingual (Orig) versus English-embedded code-switched (CSW) benchmarks across Arabic, German, French, and Chinese, rounded to two decimals. Bold indicates the higher score in each Orig/CSW pair. Italic indicates instances where performance did not change between the original and code-switched versions. # 6.2 When Code-Switching Goes Extreme To assess performance under more complex multilingual mixing, an "extreme" CSW experiment was conducted on the MMLU benchmark. English served as the matrix language, with nouns code-switched using three distinct embedded-language sets: Setting 1 featured a non-Latin script pair ($\mathcal{L}_{\mathrm{embedded}} = \{\mathrm{Arabic}, \mathrm{Chinese}\}$), Setting 2 used a Latin script pair ($\mathcal{L}_{\mathrm{embedded}} = \{\mathrm{French}, \mathrm{German}\}$), and Setting 3 combined all four languages ($\mathcal{L}_{\mathrm{embedded}} = \{\mathrm{Arabic}, \mathrm{Chinese}, \mathrm{French}, \mathrm{German}\}$). Table 4: MMLU accuracy for extreme CSW with English as the matrix language and the embedded languages being Arabic and Chinese (Setting 1), French and German (Setting 2), and Arabic, Chinese, French, and German (Setting 3), alongside the monolingual English baseline. The highest scores are indicated in bold.
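In these multi-language settings, each switch point must be assigned one of the embedded languages. The paper delegates this to Claude via prompting; purely as an illustration of spreading borrowings evenly across languages, a round-robin assignment (all names hypothetical) could look like:

```python
from itertools import cycle

def assign_embedded_languages(switch_points, embedded_languages):
    """Assign an embedded language to each candidate switch point in
    round-robin order, so borrowings are spread evenly across the
    embedded-language set. (Illustrative stand-in for the prompt-level
    'borrow evenly' instruction, not the paper's implementation.)"""
    lang_cycle = cycle(embedded_languages)
    return {point: next(lang_cycle) for point in switch_points}

# Setting 3: all four embedded languages
plan = assign_embedded_languages(
    ["beauty", "quality", "things", "knowledge"],
    ["Arabic", "Chinese", "French", "German"],
)
print(plan)
# {'beauty': 'Arabic', 'quality': 'Chinese', 'things': 'French', 'knowledge': 'German'}
```

With more switch points than languages, the cycle wraps around, keeping each language's share within one token of the others.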
For generating the code-switched text across these settings, Claude was additionally prompted to borrow words evenly from the specified embedded languages for each instance. Table 4 demonstrates that all models experience a decline in MMLU accuracy under extreme code-switching relative to the monolingual English baseline. For example, Llama 70B's score decreases from 0.77 to between 0.70 and 0.72, and Qwen 72B's from 0.77 to 0.73–0.74. Analyzing language-script effects by comparing the non-Latin mix (Setting 1) against the Latin mix (Setting 2) reveals no uniform penalty for non-Latin scripts: ALLaM 7B achieves higher accuracy with the non-Latin pair (0.56 vs. 0.54), whereas Mistral 7B performs better with the Latin pair (0.56 vs. 0.53). Moreover, extending the embedded set to all four languages (Setting 3) does not invariably yield the lowest scores: while Llama 70B (0.70) and Qwen 72B (0.73) record their minima in Setting 3, other models exhibit accuracies intermediate between those in Settings 1 and 2. # 7 Mitigation strategies To mitigate the performance declines induced by CSW, we investigate two strategies: a prompt-based approach, which prepends explicit instructions to code-switched inputs, and a model-based approach, which fine-tunes LLMs on synthetic CSW data. # 7.1 Prompt-based Mitigation Each noun-token code-switched benchmark instance was prepended with an explicit instruction indicating that the input involves English mixed with an embedded language. Further details on the prompts used per benchmark are provided in Appendix C. Table 5: Impact of an instructional prompt on LLM weighted average accuracy for noun-token code-switched benchmarks. English serves as the matrix language, with results shown for various embedded languages. The highest scores are indicated in bold. The results of the prompt-based mitigation approach, presented in Table 5, show considerable variation across models when compared to unprompted noun-token CSW (Table 1).
For some models, most notably the Qwen family, the addition of an explicit instruction led to consistent performance gains. Qwen 72B improved across all language pairs, most remarkably surpassing its monolingual English weighted average accuracy (EN→ZH: 0.72 vs. EN: 0.69). Similarly, Qwen 7B also benefited, with EN→ZH improving from 0.57 to 0.59 (Δ = +0.02). Allam 7B exhibited minor improvements as well, such as EN→AR increasing from 0.55 to 0.56 (Δ = +0.01). Conversely, for other models, particularly the Llama family and Mistral 7B, the prompt-based strategy was frequently detrimental. Llama 8B saw weighted average accuracy declines across all embedded languages (e.g., EN→FR dropped from 0.52 to 0.48, Δ = −0.04). More substantial drops were observed for Llama 70B, especially on EN→AR and EN→ZH, where performance fell by 13 and 17 points respectively. Llama 3B and Mistral 7B similarly exhibited declines (e.g., Llama 3B EN→AR: Δ = −0.16).

# 7.2 Model-based Mitigation

Directly fine-tuning LLMs on code-switched text presents another avenue for mitigation. For this, Llama 8B was selected, primarily due to its limited responsiveness to prompting within its size category. A parallel corpus of TED Talk transcripts (Qi et al., 2018) spanning English, Arabic, Chinese, French, and German was utilized. The instruction-tuning dataset was constructed by first selecting samples from the parallel corpus where the English sentence length was greater than 70 words. This filtering yielded approximately 3,650 pairs per language combination.
Noun-token CSW, with English as the matrix language, was then applied to these, resulting in an instruction-tuning dataset of approximately 14,600 training samples. The instruction required the model to generate the code-switched text from the original English and embedded-language sentences, using five distinct prompt templates to ensure instruction diversity (further details in Appendix D).

Figure 3: Comparison of Llama 8B and its instruction-tuned variant (CSW-Llama 8B) on monolingual English benchmarks (Belebele, MMLU, and XNLI) versus their noun-token code-switched counterparts. English serves as the matrix language, with Arabic, French, German, and Chinese as embedded languages.

The impact of this instruction fine-tuning is illustrated in Figure 3. The baseline Llama 8B model achieved an English-only weighted average accuracy of 0.59 on the combined benchmarks. Introducing noun-token CSW without fine-tuning resulted in a weighted average accuracy reduction of up to 0.11 points, depending on the embedded language. After fine-tuning on the code-switched corpus (yielding CSW-Llama 8B), a partial recovery of performance was observed. The most significant improvement was for the EN→AR setting, where the weighted average accuracy increased by +0.04 points over the baseline. The smallest gain was for EN→FR, with an increase of +0.03 points.
Code-switching (CSW) is the act of alternating between two or more languages within a single discourse. This phenomenon is widespread in multilingual communities, and increasingly prevalent in online content, where users naturally mix languages in everyday communication. As a result, Large Language Models (LLMs), now central to content processing and generation, are frequently exposed to code-switched inputs. Given their widespread use, it is crucial to understand how LLMs process and reason about such mixed-language text. This paper presents a systematic evaluation of LLM comprehension under code-switching by generating CSW variants of established reasoning and comprehension benchmarks. While degradation is evident when foreign tokens disrupt English text, even under linguistic constraints, embedding English into other languages often improves comprehension. Though prompting yields mixed results, fine-tuning offers a more stable path to degradation mitigation.
[ "cs.CL" ]
# 1 Introduction

Code obfuscation involves the transformation of a program into a form intentionally more difficult to understand, targeting both human analysts and automated analysis tools [20]. While the functionality remains identical, the syntax and structure are deliberately altered to obscure the code's true purpose. These transformations range from simple techniques, such as renaming identifiers or inserting meaningless instructions, to advanced methods, such as control flow flattening or full program encryption [21]. Obfuscation serves legitimate applications, such as protecting intellectual property by preventing reverse engineering, but it is often exploited by malware creators to hide malicious functionality, complicating detection and analysis [22]. This dual-use scenario, which serves both defensive software protection and offensive malware development, positions obfuscation as a critical area of cybersecurity research [29].

Reverse engineering binary code typically begins with decompilation: the process of transforming machine code back into higher-level source code representations [30]. Decompilation itself is a complex challenge, requiring the reconstruction of control flow, data types, and program structure from binaries where variable names, comments, and logical organization have been lost during compilation [31]. Building upon decompilation, traditional deobfuscation methods focus on reversing or simplifying the obfuscation to clarify the underlying logic [24]. These typically involve program analysis techniques: static (analyzing code without executing it) [23] and dynamic (running or emulating code) [25, 26]. Static methods employ tools such as IDA [27] or Ghidra [28], analyzing control flow to recognize high-level constructs. Advanced static analysis techniques such as symbolic execution, abstract interpretation, or program slicing analyze logic without execution.
Dynamic analysis methods include controlled execution environments or taint analysis to reveal hidden logic at runtime. Usually, hybrid approaches that combine static and dynamic analysis offer the best results. However, traditional methods often require significant manual intervention and struggle against advanced obfuscation, which can render common analysis tools ineffective. This limitation results in an ongoing automation challenge, an "arms race" between code protectors and analysts [22].

Large language models (LLMs) have made substantial advances in recent years, transforming natural language processing through models such as BERT [1], GPT [2], and T5 [3]. Using transformer architectures [4], extensive datasets such as The Stack [16], and enhanced computational resources, LLMs now excel in various complex tasks across multiple domains. In software engineering specifically, models like CodeBERT [5] and Codex [17] (the foundational technology behind GitHub Copilot) have demonstrated exceptional performance in code completion, error detection, and multilingual code generation. Recent comprehensive surveys [15] and empirical analyses [17] highlight both their significant capabilities and ongoing limitations. Techniques such as instruction tuning [6] and in-context learning [2] have further boosted their effectiveness. Furthermore, rigorous benchmarks such as RepoBench [7] and LiveCodeBench [8] systematically evaluate their performance, while research on attention mechanisms [9] and model robustness [10] underscores their potential for secure coding applications. Emerging tools such as CodeGen [11] and CodeT5+ [19, 12], alongside ongoing studies [13, 14], continue to push the limits of automated code manipulation, highlighting the intrinsic connection between advancements in natural language processing and programming language analysis [18].
These advancements in code manipulation using large language models (LLMs) may introduce new opportunities for automating deobfuscation tasks. Viewing deobfuscation as a translation task, similar to translating between natural languages, these models can produce more intelligible forms from obfuscated code. Previous studies have successfully used neural machine translation (seq2seq) models to decompile assembly back into high-level languages like C or Go [32], achieving accuracy comparable to traditional decompilers. Such models can rapidly adapt across multiple languages, demonstrating substantial potential for automated reverse engineering. Early explorations into LLM-driven deobfuscation show promise. Lachaux et al. introduced DOBF, a pre-training method that helps models recover source code from obfuscated snippets [33]. Although still limited, DOBF underscores the potential for significant future advances. Unlike deterministic deobfuscators, LLMs provide semantic summaries and high-level interpretations of obfuscated logic, significantly aiding human analysts. However, current LLMs remain prone to inaccuracies or "hallucinated" logic, risking misleading interpretations. Even so, combining these models with traditional tools has already shown substantial benefits, significantly reducing manual analysis effort [34]. Our research specifically addresses this gap by systematically evaluating current state-of-the-art (as of March 2025) general-purpose LLMs on raw assembly code from real-world obfuscated binaries. Although promising, most previous studies on LLM-driven deobfuscation have focused primarily on high-level languages or direct binary-to-source decompilation, often relying heavily on domain-specific fine-tuning and specialized knowledge such as syntax trees or predefined language constructs [35].
This highlights the need to evaluate general-purpose LLMs directly on disassembled assembly code, a common yet challenging scenario frequently encountered during binary analysis. We specifically target disassembled native assembly code, which analysts frequently face when high-level decompilers fail. By treating raw assembly instructions as token sequences, analogous to natural language sentences, we uniquely test whether large-scale general-purpose LLMs, trained on broad text and code datasets, can practically automate assembly-level deobfuscation without fine-tuning or specialized domain knowledge, significantly reducing manual analysis efforts. This perspective provides novel insights bridging high-level deobfuscation and detailed binary analysis, highlighting capabilities and limitations previously unidentified.

Table 1: Summary of LLM Properties and Reasoning Mechanisms. Legend: Context Window = CW, Max Output Tokens = MOT, Built-in Reasoning = BR.

Based on our analysis of current capabilities and gaps in LLM-driven deobfuscation, this research specifically addresses the following questions:

• To what extent can general-purpose LLMs successfully analyze and deobfuscate raw assembly code from obfuscated binaries without specialized fine-tuning, and what levels of human intervention are required for different models and techniques?
• How do different obfuscation techniques (bogus control flow, instruction substitution, control flow flattening, and combined techniques) affect LLM deobfuscation performance, and what dimensional capabilities (reasoning depth, pattern recognition, noise filtering, context integration) do they primarily challenge?
• What specific types of error do LLMs make when attempting to deobfuscate assembly code, and how do these error patterns relate to fundamental limitations in LLMs' reasoning and pattern recognition capabilities?
• What implications do current LLM deobfuscation capabilities have for developing next-generation obfuscation techniques and improved automated deobfuscation tools in the cybersecurity landscape?

# 2 Methodology

We evaluated eight state-of-the-art commercial Large Language Models (LLMs) - GPT-3o Mini High, GPT-4o, GPT-4.5, O1 Pro Mode, DeepSeek R1, Grok 3, Grok 2, and Claude 3.7 Sonnet - on their ability to analyze and deobfuscate a known C program obfuscated using Obfuscator-LLVM (OLLVM) [37]. These models differ significantly in several dimensions, such as their model size, the scale and nature of their training datasets, their context window lengths, and their inherent reasoning capabilities. A detailed comparison of the specific characteristics of each model is provided in Table 1. This specific test case was previously comprehensively analyzed by Quarkslab [36] and was chosen due to its documented complexity and practical relevance, making it an ideal benchmark for a semi-realistic evaluation scenario. Quarkslab successfully reversed OLLVM's obfuscation protections, such as control-flow flattening, bogus control flow, and instruction substitution, by employing symbolic execution within the Miasm framework, though their approach required considerable manual effort and specialized expertise to analyze the obfuscated code. Furthermore, the chosen test program includes numerous conditional branches, making it particularly suitable for assessing the effectiveness and resilience of different obfuscation methods. The original unobfuscated C function used in our evaluation is shown in Listing 1. This function implements a straightforward algorithm that computes different arithmetic and bitwise transformations based on the input value modulo 4.
Despite its relative simplicity, the function incorporates several characteristics that make it ideally suited for obfuscation testing: multiple conditional branches, diverse bitwise operations, and a distinctive magic constant (0xBAAAD0BF) that serves as a recognizable marker throughout the assembly code.

Listing 1: Original unobfuscated C function

We compiled this function using OLLVM, applying various obfuscation configurations to produce five distinct binaries. Each binary was disassembled using Capstone [38], producing x86_64 assembly code for our detailed low-level analysis. For reproducibility of our results, we provide the specific compilation flags used for each binary:

• code_unobf.capstone: Baseline with no obfuscation.
• code_sub.capstone: Instruction substitution applied. Compiled with the following flags: -mllvm -sub
• code_fla.capstone: Control flow flattening applied. Compiled with flags: -mllvm -fla -mllvm -perFLA=100
• code_bcf.capstone: Bogus control flow applied. Compiled with flags: -mllvm -bcf -mllvm -boguscf-prob=100 -mllvm -boguscf-loop=1
• code_all.capstone: All three obfuscation techniques combined. Compiled with flags: -mllvm -sub -mllvm -fla -mllvm -perFLA=100 -mllvm -bcf -mllvm -boguscf-prob=100 -mllvm -boguscf-loop=1

We specifically focused on the x86_64 architecture as it remains the dominant instruction set in desktop and server environments, making it particularly relevant for real-world malware analysis and reverse engineering scenarios. Our evaluation specifically targeted widely available commercial LLMs due to their advanced capabilities and relevance to practical cybersecurity scenarios. Initially, our goal was to perform a statistical analysis to quantify each model's effectiveness in deobfuscation tasks.
Our methodical testing established that certain obfuscation methods were entirely resistant to deobfuscation by these models. We therefore implemented a comprehensive qualitative analysis to thoroughly document the specific errors exhibited by each model. To ensure robustness, each model and scenario combination was extensively tested multiple times to identify common patterns in both successful and failed decompilation attempts. This systematic approach allowed us to select the most representative results for a detailed review. The repeated testing methodology not only improved the reliability of our findings, but also closely simulated realistic attacker constraints, including limited knowledge, resources, and opportunities for experimentation. We structured interactions with the AI models based on incremental attacker knowledge levels, defined as follows:

• Level 0: No Knowledge Needed. The AI fully deobfuscates without assistance.
• Level 1: Basic Guidance. Minimal hints to correct minor errors.
• Level 2: Structural Correction. Significant guidance needed for structural issues.
• Level 3: Major Intervention. Detailed guidance necessary to resolve complex logic errors.
• Level 4: Expert Rework. Extensive expert intervention required.
• Level 5: Beyond Expert Correction. Errors too fundamental, requiring a complete restart.
• No Level (-): Unable to Analyze. The AI produced no meaningful output.

Detailed dialogues with each model were documented and analyzed to identify patterns in reasoning, common mistakes, and overall effectiveness in the deobfuscation process. Ultimately, our primary objective is to evaluate how commercial AI models might realistically be exploited for code analysis and deobfuscation.
This has dual implications for cybersecurity: it highlights risks to legitimate software protection mechanisms while also informing potential defensive applications in malware analysis. These findings aim to help legitimate software developers create more resilient obfuscation techniques and security researchers understand the evolving landscape of AI-assisted code analysis tools.

Table 2: Obfuscation Variants and Required Attacker Knowledge Levels. Legend: Bogus Control Flow = BCF, Control Flow Flattening = CFF, Instruction Substitution = IS.

# 3 Results

The performance of the models evaluated against different obfuscation methods is summarized in Table 2. Each entry indicates the level of attacker knowledge required for successful deobfuscation, based on the criteria defined in Section 2. Higher numerical values indicate greater difficulty and increased human intervention. A "-" indicates that the particular scenario was not applicable. The following sections present a qualitative analysis of model performance against four obfuscation techniques: Bogus Control Flow (BCF), Instruction Substitution (IS), Control Flow Flattening (CFF), and Combined Techniques (All). Our analysis uses the comprehensive Quarkslab analysis [36] as an implicit baseline: their detailed technical breakdown successfully deobfuscated the same code through symbolic execution, but required specialized expertise and significant time investment. We focus on identifying specific error patterns rather than quantitative success rates, using standardized prompts across all models as described in Section 2. Each subsection addresses one obfuscation method, evaluating individual model performance and excluding models that produced no meaningful results.
The complete transcripts of our interactions with the LLMs are not included in this article due to length constraints, but have been made publicly available in a dedicated repository [60] to support reproducibility and further analysis. The subsequent Discussion section (Section 4) synthesizes these findings into a theoretical framework explaining LLM deobfuscation capabilities and their implications for cybersecurity.

# 3.1 Bogus control flow

This section examines Bogus Control Flow (BCF). This obfuscation aims at complicating reverse engineering by introducing misleading control flow structures in the form of opaque predicates. For a detailed exposition and further insight into this method, see [36, 44]. Among the evaluated models, only Claude 3.7 Sonnet successfully deobfuscated the BCF-protected code on its initial attempt without requiring additional guidance; consequently, it is excluded from the detailed comparative analysis presented herein.

# 3.1.1 ChatGPT-4o

In our examination of the deobfuscation dialogue with ChatGPT-4o, we identified a significant error in its deobfuscation attempt. The mistake involved misinterpreting an opaque predicate, which led the model to mispredict the code's control flow. The assembly code includes the condition (var1 * (var1 - 1)) & 1 == 0, where var1 is loaded from memory at MEMORY[0x404000]. This predicate is always true because the product of two consecutive integers is always even, ensuring the least significant bit is 0. ChatGPT initially misinterpreted this predicate, stating that if this condition or another condition, var2 < 10, was true (expressed as flag1 || flag2), the program would exit to "END_PROGRAM".
Conversely, it assumed that if the condition failed, the program would jump to address 0x40169e, initially labeling this as an exit, then later correctly identifying it as an infinite loop. However, since flag1 (the opaque predicate) always evaluates to true, the combined condition flag1 || flag2 is always true, making the infinite loop at 0x40169e unreachable. The relevant assembly snippet is shown in Listing 2; the key steps are as follows:

• imul eax, edx computes var1 * (var1 - 1).
• and eax, 1 checks the least significant bit. Since the product is always even, this is always 0.
• sete dh sets dh to 1 because the result of eax == 0 is always true.
• test dh, 1 and jne 0x40104f ensure the jump to 0x40104f is always executed, rendering the subsequent jump to 0x40169e (the infinite loop) unreachable.

Listing 2: Relevant Assembly Snippet

Here ChatGPT-4o incorrectly assumed that this condition could fail, producing overly complicated pseudocode with unnecessary conditional checks. As a result, it produced the completely incorrect deobfuscated code shown in Listing 3:

Listing 3: Final output ChatGPT-4o

ChatGPT-4o ultimately failed to deobfuscate the assembly accurately due to its misinterpretation of the opaque predicate.

# 3.1.2 ChatGPT 4.5

The analysis performed with ChatGPT 4.5 began with the examination of memory accesses at addresses 0x404000 (var1) and 0x404004 (var2).
The model identified the recurring assembly pattern shown in Listing 4:

Listing 4: Relevant Assembly Snippet

ChatGPT 4.5 correctly identified this code snippet as an opaque predicate: the condition (var1 * (var1 - 1)) & 1 == 0 is always true because the product of two consecutive integers is even, thus ensuring branches leading to the address 0x40169e are unreachable. Upon explicitly providing the conditions var1 = 0 and var2 = 0, the model appropriately recognized instructions like cmp ecx, 0xa (checking var2 < 10) as obfuscation noise, since they always evaluate to true under these conditions. However, ChatGPT 4.5 initially misrepresented the arithmetic operations involved. For instance, it initially produced the incorrect transformation for case 0 shown in Listing 5:

Listing 5: Incorrect Transformation for Case 0

This statement differs significantly from the original assembly instructions. The correct interpretation, based on the original assembly at address 0x401138, is shown in Listing 6:

Listing 6: Correct Transformation for Case 0

After requesting verification, ChatGPT 4.5 subsequently corrected its output, providing the accurate representation aligned with the assembly in Listing 7:

Listing 7: Final output 4.5

This final result correctly matches the logic implemented at addresses 0x401138 (case 0), 0x40128a (case 1), 0x4013dc (case 2), and 0x401486 (case 3). ChatGPT 4.5 corrected its initial errors with guidance, but required human intervention for accurate results.
# 3.1.3 ChatGPT-pro-o1

ChatGPT-pro-o1 began its analysis by identifying a recurring conditional pattern involving memory locations 0x404000 (G1) and 0x404004 (G2), along with arithmetic transformations applied to a local variable (VALUE). The following sections detail its performance. Initially, ChatGPT-pro-o1 correctly recognized four arithmetic transformations from the obfuscated assembly code. These transformations matched the ground truth precisely, despite the presence of obfuscation techniques such as redundant increment/decrement instructions and stack manipulations. The model also successfully identified the opaque predicate (G1 * (G1 - 1)) & 1 == 0 || G2 < 10 as always evaluating to true, based on the mathematical property that the product of two consecutive integers is always even. However, despite correctly recognizing this predicate initially, ChatGPT-pro-o1 treated it as part of an iterative logic rather than a consistently true condition. The model encountered the same assembly pattern previously shown in Listing 4. This condition, observed multiple times (e.g., at 0x401009 and 0x40108c), consistently evaluates to true at 0x401044. However, ChatGPT-pro-o1 initially interpreted these repetitions as indicative of essential iterative logic rather than obfuscation redundancy. Furthermore, instead of identifying the concise switch-like structure at address 0x401074, triggered by data & 3, ChatGPT-pro-o1 initially proposed a complex chain of conditional checks with repeated transformations.
In contrast, the actual assembly implements a straightforward conditional structure based on data & 3 at 0x401074, selecting exactly one transformation (such as the one at 0x401138) and then terminating via a single return instruction at 0x40169d. ChatGPT-pro-o1's initial interpretation diverged by incorrectly modeling the process as iterative. After further prompting and clarification, ChatGPT-pro-o1 corrected its interpretation and provided an accurate deobfuscated implementation that correctly aligns with the actual assembly logic, structured as a switch statement dependent on input & 3. ChatGPT-pro-o1 excelled at recognizing arithmetic patterns in obfuscated code but initially misread redundant control flow as iterative logic, correcting it with guidance.

# 3.1.4 o3-mini-high

The o3-mini-high model began by analyzing recurring conditional checks involving global variables located at memory addresses 0x404000 (param1 = var1) and 0x404004 (param2 = var2). Specifically, it identified the same assembly sequence previously analyzed for ChatGPT-4.5, presented in Listing 4. The model correctly recognized this as an opaque predicate, ((param1 - 1) * param1) & 1 == 0, which consistently evaluates to true due to the mathematical property that the product of two consecutive integers is always even. As a result, the branches to the address 0x40169e were correctly identified as unreachable. Despite correctly identifying this opaque predicate, the model subsequently represented the control flow incorrectly as a looping state machine.
Its initial interpretation is presented in the pseudocode snippet below (Listing 8):

Listing 8:

function obfuscated_function():
    while True:
        flag1 = (((param1 - 1) * param1) & 1 == 0) or (param2 < 10)
        if not flag1: goto cleanup_and_return
        local_value = get_some_local_value()
        if (control_byte & 1) == 0:
            if not flag2: goto cleanup_and_return
            local_value = transform_variant_A(local_value)
        else:
            if not flag3: goto cleanup_and_return
            local_value = transform_variant_B(local_value)
        local_value = complex_transform(local_value)
        if termination_condition_reached(): break
cleanup_and_return:
    return

After providing explicit conditions (param1 = 0, param2 = 0) and highlighting hidden constants (e.g., 0xbaaad0bf, 2, 3, 4, 5), the model produced a more structured version. However, it continued to interpret the assembly as a loop-based state machine. Further prompts requesting clarification and restructuring resulted in the representation shown in Listing 9, which remained incorrect:

Listing 9: o3-mini final try

This final output differs significantly from the actual assembly logic, which implements a single-pass switch determined by the expression data & 3 at address 0x401074. The correct behavior, confirmed by manual analysis, executes exactly one arithmetic transformation (for example, at address 0x401138 for case 0) and terminates with a single return instruction at 0x40169d. However, the model repeatedly interpreted the assembly as an iterative construct, a structure not present in the original code.
Despite clarifications, o3-mini-high persistently misread the non-iterative assembly as a loop, accurately spotting transformations but missing the single-pass structure.

# 3.1.5 Grok 3

In its initial attempt, Grok 3 interpreted the assembly code as containing a functional loop driven by conditions involving variables A and B, generating the pseudocode shown in Listing 10:

Listing 10: Grok 3 initial interpretation

However, considering the explicitly provided conditions A = 0 and B = 0, the loop condition (A > 1 AND B < 10) evaluates to false, because A > 1 is false (since 0 > 1 is false) and B < 10 is true (since 0 < 10 is true); the combined condition (false AND true) therefore evaluates to false. Thus, under these explicit conditions, the loop never executes, contradicting Grok 3's initial interpretation. Upon receiving further guidance explicitly stating that [0x404000] = 0 and [0x404004] = 0, Grok 3 revised its analysis, removing the loop structure and accurately producing the pseudocode shown in Listing 11:

Listing 11: Grok 3 final deobfuscation

This final pseudocode correctly reflects the assembly logic, which involves performing exactly one arithmetic transformation based on the condition x & 3. Each conditional jump involving variables A and B consistently results in a single execution path rather than iterative looping. Grok 3 accurately deobfuscated the code with explicit guidance, though its initial errors suggest condition clarity is crucial.
# 3.1.6 DeepSeek R1

Similar to some other models, DeepSeek R1 incorrectly encased the deobfuscated logic within a while loop, as shown in Listing 12:

Listing 12: DeepSeek R1 initial error

Despite prompts to reassess its deobfuscation, R1 persisted with a loop, simplifying it to a for loop with exactly 10 iterations:

Listing 13: DeepSeek R1 incorrect loop

Although the switch statement itself was correctly interpreted, the erroneous outer loop caused the transformation to be applied 10 times, repeatedly altering the result variable, as shown in Listing 14:

Listing 14: DeepSeek R1 Last Attempt

DeepSeek R1's misinterpretation of the bogus control flow as requiring a concrete loop structure (forcing exactly 10 iterations) demonstrates a fundamental failure in recognizing the non-iterative nature of the original algorithm.

# 3.2 Instruction Substitution

In this section, we analyze the obfuscation known as Instruction Substitution (IS). This type of obfuscation aims at substituting binary operators with equivalent but more complex sequences of instructions. For a comprehensive description and additional details regarding this obfuscation technique, refer to [36, 44]. As shown in Table 2, GPT-Pro-o1 and GPT-3o Mini failed completely, unable to provide any meaningful results.

# 3.2.1 ChatGPT 4o

ChatGPT 4o correctly identified the underlying switch structure based on the condition input & 3. However, the arithmetic and bitwise transformations derived by ChatGPT 4o differed substantially from the logic explicitly implemented by the assembly blocks located at addresses 0x27, 0xb7, 0x113, and 0x1ac. The final deobfuscation produced by ChatGPT 4o is shown in Listing 15. A detailed analysis reveals substantial inaccuracies in ChatGPT 4o's approach, as seen in Listing 15.
In the first case, the model incorrectly introduced a bitwise XOR operation involving an unrelated constant (0xe6c98769), deviating from the correct bitwise OR operation with the constant 0xBAAAD0BF represented in the original assembly. The second case was drastically simplified by ChatGPT 4o to a constant zero, whereas the actual assembly involves necessary bitwise AND operations followed by arithmetic multiplication, neither of which was correctly represented by the model. In the third case, ChatGPT 4o erroneously introduced a constant (0x811d5b51) and incorrectly replaced the intended bitwise OR-based multiplication with alternative operations. Finally, in the fourth case, although the addition of the constant 0xBAAAD0BF was accurately identified, the model incorrectly applied a bitwise XOR operation with the constant 0xf4577a2a instead of the correct bitwise AND operation specified by the original assembly. Overall, ChatGPT 4o failed to deobfuscate the code.

# 3.2.2 GPT 4.5

ChatGPT 4.5 correctly identified the switch structure conditioned on input & 3. However, the arithmetic and bitwise transformations it proposed differ significantly from those explicitly represented in the original assembly. Specifically, the final output from ChatGPT 4.5 is shown in Listing 16:

Listing 15: Instruction Substitution 4o final deobfuscation attempt

Listing 16: Instruction Substitution 4.5 final deobfuscation attempt

An in-depth analysis of ChatGPT 4.5's proposed implementation in Listing 16 shows systematic deviations from the original logic. In the first case (address 0x27), the model incorrectly introduced bitwise inversion (~input), extraneous XOR operations (with constants 0xE6C98769 and 0xBAAAD0BF), and an additional arithmetic offset (+ 2). These operations are not present in the original assembly logic, which correctly employs a simple bitwise OR operation combined with XOR-based multiplication.
In the second case (address 0xb7), the assembly correctly implements a bitwise AND operation followed by arithmetic addition. However, ChatGPT 4.5 inaccurately replaced these operations with bitwise inversion, OR-based operations involving unrelated masks (0x8ABD1CD5), and an arithmetic subtraction (input - 3), significantly deviating from the actual code behavior. For the third case (address 0x113), the original assembly clearly specifies a straightforward XOR operation using the constant 0xBAAAD0BF, multiplied by a bitwise OR operation involving the input. Conversely, ChatGPT 4.5 introduced multiple redundant XOR operations (e.g., 0x0F603E35), extraneous constants (0x811D5B55), and unnecessarily complex arithmetic, deviating substantially from the correct logic. Lastly, in the fourth case (address 0x1ac), rather than accurately applying a simple arithmetic addition followed by a bitwise AND operation, the model proposed incorrect arithmetic negation, unnecessary bitwise inversion, and irrelevant OR-based operations with an unrelated constant (0xF4577A2A), significantly misrepresenting the original code. ChatGPT 4.5 thus failed to accurately deobfuscate the assembly: despite identifying the switch structure, it made significant errors in the arithmetic and bitwise transformations.

# 3.2.3 Grok 3

Grok 3 made multiple attempts to deobfuscate the provided assembly code. While it correctly identified the switch structure based on input & 3, its interpretation of the computational logic deviated significantly from the intended functionality. Grok 3's final deobfuscation attempt resulted in the implementation shown in Listing 17:

Listing 17: Instruction Substitution Grok 3 final deobfuscation attempt

Upon detailed verification, Grok 3's results shown in Listing 17 proved only partially accurate. Specifically, in cases 1 and 3, Grok 3 precisely matched the intended assembly logic.
For case 1, it correctly implemented the bitwise AND operation with 0xBAAAD0BF, followed by arithmetic addition (3 + input) and multiplication. Similarly, in case 3, the arithmetic addition with 0xBAAAD0BF followed by multiplication with the result of a bitwise AND operation (input & 5) was accurately represented. In contrast, significant deviations occurred in cases 0 and 2. In case 0, Grok 3 incorrectly simplified the original logic. The intended assembly explicitly involves a bitwise OR with 0xBAAAD0BF and multiplication using an XOR operation with the constant 2. Grok 3, however, mistakenly replaced these operations with a single bitwise XOR operation (input ^ 0xBAAAD0BF) followed by an incorrect bitwise AND with an unrelated constant (0xE6C98769), thus significantly altering the intended arithmetic logic. For case 2, the intended assembly logic involves a bitwise XOR operation with 0xBAAAD0BF followed by arithmetic multiplication involving a bitwise OR operation with a small constant. Instead, Grok 3 inaccurately applied a bitwise XOR with an extraneous constant (0x0F603E35), followed by a bitwise AND with 0xBAAAD0BF. These inaccuracies substantially differ from the intended computational steps present in the assembly. So, despite success in some cases, Grok 3 ultimately failed to deobfuscate the assembly due to critical errors in computational logic.
# 3.2.4 DeepSeek R1

DeepSeek R1 correctly identified the high-level switch structure based on input & 3, but demonstrated significant difficulties in accurately simplifying the bitwise and arithmetic operations within each case, as shown in Listing 18:

```c
unsigned int secret_function(unsigned int n) {
    switch (n & 3) {                 // Original comparisons -> n % 4
    case 0:                          // Block 0x27
        return ((0xBAAAD0BF | n) ^ (0xE6C98769 & ~n) ^ 2) *
               ((n & ((0xBAAAD0BF | n) ^ (0xE6C98769 & ~n))) |
                (~n & 2));
    case 1: {                        // Block 0xb7
        unsigned int t = (~n | 0xBAAAD0BF) & 0x8ABD1CD5;
        return t * (n - 3);
    }
    case 2: {                        // Block 0x113
        unsigned int mask = 0x00F603E35;  // Extended to 0xF603E35
        unsigned int a = (n & 0xBAAAD0BF) | (~n & mask);
        unsigned int b = (n ^ 0x811D5B51 | 0x7EE2A4AE) & 0x811D5B51;
        return a * b;
    }
    default: {                       // Block 0x1ac (case 3)
        unsigned int magic = 0xF4577A2A;
        unsigned int math = -(n + 0xBAAAD0BF);  // Explicit wraparound
        unsigned int bits = (n | ~n ^ 5) & magic;
        return math * bits;
    }
    }
}
```

Comparing the output in Listing 18 with the reference implementation reveals substantial inaccuracies across all cases. Case 0: The reference implementation uses the simple expression (n | 0xBAAAD0BF) * (2 ^ n), but DeepSeek R1 produced an unnecessarily complex expression involving multiple nested operations: ((0xBAAAD0BF | n) ^ (0xE6C98769 & ~n) ^ 2) * ((n & ((0xBAAAD0BF | n) ^ (0xE6C98769 & ~n))) | (~n & 2)), introducing extraneous constants and operations not present in the original code.
Case 1: While closer to the reference, DeepSeek R1 still unnecessarily complicated the expression with (~n | 0xBAAAD0BF) & 0x8ABD1CD5 instead of the simpler (n & 0xBAAAD0BF). The (n - 3) component is correctly identified, but the overall expression remains needlessly complex. Case 2: DeepSeek R1 significantly diverged from the reference with a multi-step computation involving temporary variables and unrelated constants. The expression (n ^ 0x811D5B51 | 0x7EE2A4AE) & 0x811D5B51 is particularly problematic, as it introduces constants not present in the original code and fails to capture the simple (n ^ 0xBAAAD0BF) * (4 | n) logic. Case 3: The model incorrectly represented the logic as -(n + 0xBAAAD0BF) * ((n | ~n ^ 5) & 0xF4577A2A) instead of the correct (n + 0xBAAAD0BF) * (5 & n), correctly identifying the addition operation but introducing sign inversion and unnecessarily complex bitwise operations. Interestingly, when presented with isolated bitwise expressions outside the context of assembly deobfuscation, DeepSeek R1 was able to simplify them correctly, suggesting the model's difficulty stemmed from the contextual complexity of assembly analysis rather than an inherent inability to handle bitwise operations. DeepSeek R1 failed to successfully deobfuscate the assembly due to inaccurate simplification of bitwise and arithmetic operations despite identifying the switch structure.
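Instruction substitution of this kind rests on algebraic identities over machine integers. Two classic instances of the pattern, illustrative only and not taken from the sample binary, rewrite an addition and an XOR as longer equivalent sequences:

```c
/* Illustrative instruction-substitution identities (not from the sample binary):
   an obfuscator may rewrite a single operator as an equivalent longer sequence. */
unsigned int add_plain(unsigned int x, unsigned int y) { return x + y; }

/* x + y == (x ^ y) + 2 * (x & y): XOR is the carry-less sum, AND gives the carries. */
unsigned int add_substituted(unsigned int x, unsigned int y) {
    return (x ^ y) + 2u * (x & y);
}

unsigned int xor_plain(unsigned int x, unsigned int y) { return x ^ y; }

/* x ^ y == (x | y) - (x & y): OR counts every set bit once, AND removes shared bits. */
unsigned int xor_substituted(unsigned int x, unsigned int y) {
    return (x | y) - (x & y);
}
```

A deobfuscator, human or LLM, must recognize such equivalences and collapse them back to the single operator; that collapse is precisely the simplification step the models above repeatedly got wrong when it was embedded in assembly context.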
# 3.2.5 Claude 3.7 Sonnet

Unlike other models, Claude 3.7 Sonnet took a distinctive approach, completely removing all the obfuscation constants and deriving a highly simplified mathematical pattern as shown in Listing 19:

Listing 19: Instruction Substitution Claude final deobfuscation attempt

```c
int function(int x) {
    int remainder = x & 3;  // equivalent to x % 4
    int result;
    switch (remainder) {
    case 0:
        // The XOR/AND/OR sequence constructs (x ^ 2)
        // -1 ^ 2 = -3
        result = (-3) * (x ^ 2);
        break;
    case 1:
        // All the bit manipulations simplify to this
        result = x * (x + 3);
        break;
    case 2:
        // The XOR/AND/OR sequence constructs (x ^ 4)
        // -1 ^ 4 = -5
        result = (-5) * (x ^ 4);
        break;
    case 3:
        // Similar bit manipulations lead to this
        result = x * (x + 5);
        break;
    }
    return result;
}
```

Claude's approach in Listing 19 stands in stark contrast to other models, which preserved the original obfuscation constants (0xBAAAD0BF, 0xE6C98769, 0x8ABD1CD5, 0xF603E35, 0xF4577A2A) in their deobfuscated code. Claude instead identified an elegant mathematical pattern:

• For even remainders (0, 2): result = (-1 ^ (remainder + 2)) * (x ^ (remainder + 2))
• For odd remainders (1, 3): result = x * (x + (remainder + 2))

Claude's approach in Listing 19 differs from other models by replacing the original obfuscation constants with a simple mathematical pattern based on remainder values. While this higher level of abstraction elegantly simplifies the code, without empirical validation it remains unverified whether the transformation preserves the exact behavior of the original assembly across all inputs.
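The internal consistency of Claude's answer, at least, can be checked mechanically: the closed-form pattern in the two bullets reproduces the switch in Listing 19 for every input. A minimal sketch of such a check follows; the function names are ours, and it validates only that the pattern matches Listing 19, not that either matches the original assembly:

```c
/* Listing 19's switch, restated (claude_switch), against the closed-form
   pattern from the bullets above (claude_pattern). Names are illustrative. */
int claude_switch(int x) {
    int r = x & 3;
    switch (r) {
    case 0:  return (-3) * (x ^ 2);
    case 1:  return x * (x + 3);
    case 2:  return (-5) * (x ^ 4);
    default: return x * (x + 5);
    }
}

int claude_pattern(int x) {
    int r = x & 3;
    if (r == 0 || r == 2)
        return (-1 ^ (r + 2)) * (x ^ (r + 2));  /* even remainders: -1^2 = -3, -1^4 = -5 */
    return x * (x + (r + 2));                   /* odd remainders */
}
```

Checking equality of the two functions over a range of inputs confirms the pattern; checking either against the original binary would still require running the obfuscated code itself.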
# 3.3 Control flow flattening

In this section, we analyze the obfuscation known as Control flow flattening (CFF). This type of obfuscation obscures the control flow by flattening the control flow graph. For a comprehensive description and additional details regarding this obfuscation technique, refer to [36, 44]. In this section we omit several models, namely Grok 3, Chat o1, and ChatGPT 4.5, which successfully deobfuscated the code by correctly identifying the flattened switch structure and accurately reconstructing the computational logic without requiring additional guidance.

# 3.3.1 Grok 2

Grok 2 produced its final deobfuscation attempt as shown in Listing 20. Upon verification, Grok 2's implementation in Listing 20 accurately identified the switch structure determined by the two least significant bits of the input (input & 3) and correctly derived each arithmetic and bitwise operation from the assembly. However, it incorrectly associated these derived expressions with their respective conditional paths, leading to mismatched logic in three out of four cases. In detail, the computation (input + 0xbaaad0bf) * (input & 5) correctly corresponds to the original logic for the condition input & 3 == 3, yet Grok 2 mistakenly assigned it to the condition input & 3 == 0. Similarly, the expression (input | 0xbaaad0bf) * (input ^ 2), originally intended for condition input & 3 == 0, was incorrectly assigned to condition input & 3 == 1. Additionally, Grok 2 placed the expression (input & 0xbaaad0bf) * (input + 3), correctly intended for input & 3 == 1, under condition input & 3 == 3. The only correct conditional assignment was the expression (input ^ 0xbaaad0bf) * (input | 4), correctly matched with the intended condition input & 3 == 2.
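Putting the four per-case expressions above together, the intended de-flattened logic reads as follows. This sketch is ours, assembled from the expressions quoted in the analysis rather than taken from the original binary:

```c
/* Reference logic as reconstructed from the per-case expressions quoted in
   the text above; illustrative sketch, not extracted from the binary itself. */
unsigned int reference_logic(unsigned int input) {
    switch (input & 3) {
    case 0:  return (input | 0xBAAAD0BFu) * (input ^ 2u);
    case 1:  return (input & 0xBAAAD0BFu) * (input + 3u);
    case 2:  return (input ^ 0xBAAAD0BFu) * (input | 4u);
    default: return (input + 0xBAAAD0BFu) * (input & 5u);
    }
}
```

Grok 2's error amounts to permuting the case labels of this function: every arithmetic operation is preserved, yet the result changes for three of the four residue classes.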
The original obfuscation employed control flow flattening, using an intricate state machine initialized with the constant 0x64fd8a96. The assembly exhibited complex conditional jumps and redundant arithmetic instructions (e.g., repeated subtraction at addresses 0x46 and 0x5c), deliberately complicating control flow tracing. The presence of these complexities suggests challenges faced by Grok 2 in correctly mapping computational logic to conditional execution paths. Grok 2 successfully isolated the correct arithmetic and bitwise transformations from heavily obfuscated code, demonstrating proficiency in arithmetic interpretation. Nonetheless, it showed a consistent pattern of errors in accurately associating these transformations with their corresponding control-flow conditions, specifically misaligning three out of four cases. Analysis reveals that Grok 2 failed to accurately deobfuscate the assembly code, primarily due to mismatched conditional logic, although it correctly identified the underlying arithmetic operations.

# 3.3.2 ChatGPT 4o

ChatGPT 4o made multiple attempts to deobfuscate the provided assembly. Its final deobfuscation attempt is presented in Listing 21:

Listing 21: Control flow flattening ChatGPT 4o final deobfuscation attempt

The final output from ChatGPT 4o shown in Listing 21 correctly identified the intended switch structure based on the input condition input & 3, accurately reproducing the arithmetic and bitwise transformations present in the original assembly logic. However, during its initial analysis, the model produced incorrect interpretations by misrepresenting the obfuscated assembly as a complex, nested conditional structure rather than recognizing the straightforward four-case logic dictated explicitly by input & 3.
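In general, control flow flattening rewrites a structured function into a dispatcher loop over opaque state constants. A minimal sketch of the transformation on a toy payload follows; only the entry constant 0x64fd8a96 echoes the sample, and the payload and remaining state values are invented for illustration:

```c
/* Original, structured form of a toy payload (invented for illustration). */
unsigned int payload_direct(unsigned int x) {
    x = x * 3u + 1u;
    x = x ^ 0x5Au;
    return x;
}

/* Flattened form: the same two statements, execution order driven by a state
   machine. Only 0x64fd8a96 echoes the sample; the other states are arbitrary. */
unsigned int payload_flattened(unsigned int x) {
    unsigned int state = 0x64fd8a96u;              /* entry state */
    for (;;) {
        switch (state) {
        case 0x64fd8a96u: x = x * 3u + 1u; state = 0x707b18bcu; break;
        case 0x707b18bcu: x = x ^ 0x5Au;   state = 0xd5393a07u; break;
        default:          return x;                /* exit state */
        }
    }
}
```

Recovering payload_direct from payload_flattened requires exactly the context integration the models struggled with: tracing which state follows which, then discarding the dispatcher entirely.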
Initially, ChatGPT 4o introduced unnecessary intermediate state variables (such as 0xA4431FAA) and misidentified temporary computation values (e.g., 0xE9A524FA) as potential final outputs, instead of correctly tracing the final result stored at [ebp - 0x10]. After iterative prompting, explicitly clarifying that the control flow involved exactly four cases based on input & 3 and specifying the correct output variable ([ebp - 0x10]), ChatGPT 4o adjusted its interpretation and accurately represented the arithmetic and bitwise operations in their intended structure. ChatGPT 4o successfully deobfuscated the assembly with guidance but initially failed without it.

# 3.3.3 DeepSeek R1

DeepSeek R1 initially interpreted the assembly as implementing a complex state machine with multiple transitions and conditional branches. In its first deobfuscation attempt, DeepSeek R1 produced verbose pseudocode that, while technically accurate in tracing the state transitions, failed to identify the underlying simplicity of the algorithm. The model's initial representation included unnecessary state variables and transition logic, as shown in Listing 22:

```c
int obfuscated_function(int input) {
    int state = 0x64FD8A96;
    int result = 0;
    int mod = input & 3;  // Equivalent to input % 4
    while (1) {
        switch (state) {
        case 0x64FD8A96:
            state = (mod == 0) ? 0x707B18BC : 0xD5393A07;
            break;
        // Many more state transitions ...
        }
    }
}
```

When prompted by the user with the simple instruction "knowing all that, simplify the method as much as possible," DeepSeek R1 immediately recognized the redundancy in its state machine representation and produced the fully simplified implementation shown in Listing 23:

Listing 23: Control flow flattening DeepSeek R1 final implementation

DeepSeek R1 showed a solid grasp of the assembly code's core algorithm in Listing 23, correctly identifying all four cases based on input modulo 4 and their corresponding operations. Overall, it successfully deobfuscated the assembly with prompting but initially failed to simplify it autonomously.

# 3.3.4 Claude 3.7 Sonnet

Claude demonstrated exceptional deobfuscation capabilities by correctly identifying the core algorithm on its first attempt. The model accurately recognized that the obfuscated assembly implemented a simple switch structure based on input modulo 4, successfully cutting through the complex state machine and control flow obfuscation, as shown in Listing 24:

Listing 24: Control flow flattening Claude final deobfuscation attempt

As shown in Listing 24, Claude 3.7 Sonnet successfully deobfuscated the assembly but introduced a minor constant error, which was later fixed. Specifically, Claude initially misidentified the magic constant as 0xbaaad0bf instead of 0xbaad0bf, which was subsequently corrected to ensure the deobfuscated code would produce identical results to the original assembly.

# 3.4 Combined Techniques

In this section, we analyze Combined Techniques (All), meaning bogus control flow, instruction substitution, and control flow flattening applied to the same code. For a comprehensive description and additional details regarding these obfuscation techniques, refer to [36, 44]. We do not test weaker models such as GPT-4o, GPT-3o Mini, and Grok 2 here due to their particularly poor overall performance.
# 3.4.1 ChatGPT 4.5

In its initial analysis, ChatGPT 4.5 misrepresented the assembly as a looping state machine involving a perpetual loop (while(1)) combined with a switch statement. However, manual verification confirmed that the actual assembly implements a single-pass switch structure beginning at address 0x156b, without looping behavior. Additionally, the assembly includes reads from address [0] at instructions 0x14 and 0x1b, consistently returning zero at runtime and significantly simplifying the logic. Despite this simplification, ChatGPT 4.5 initially treated redundant obfuscation instructions, such as jmp 0x7d8, as indicators of functional loops. In a subsequent attempt, ChatGPT 4.5 correctly identified the high-level control structure based on the expression input & 3. However, the model erroneously assigned arbitrary constants (0xd87b0953, 0x3e2a6a0d), disregarding explicit computational sequences present after address 0x1546. Specifically, at address 0x1585, assembly instructions explicitly reference the constant 0xBAAAD0BF, yet ChatGPT 4.5 overlooked these operations by prematurely concluding its analysis based on intermediate jump-table assignments observed at address 0x1579.
In its final attempt, ChatGPT 4.5 produced the C implementation shown in Listing 25:

```c
unsigned int function(unsigned int input) {
    unsigned int result;
    switch (input & 3) {
    case 0:
        result = ((input & 0xBAAAD0BF) | (input ^ 0xBAAAD0BF)) * (~input & 2);
        break;
    case 1:
        result = input - 1;
        break;
    case 2:
        result = (~input ^ 0xBAAAD0BF) * (input + 0xAA61A945);
        break;
    case 3:
        result = input - 2;
        break;
    }
    return result;
}
```

This final implementation in Listing 25 deviated substantially from the logic verified in the original assembly. For instance, the assembly at address 0x1585 clearly illustrates operations such as:

```
mov esi, edx
and esi, 0xbaaad0bf
or  esi, edx
```

This sequence corresponds explicitly to (input & 0xBAAAD0BF) | (input ^ 0xBAAAD0BF), yet ChatGPT 4.5 incorrectly introduced an unnecessary NOT operation (~input) and an inappropriate multiplier (~input & 2). Similarly, the assembly logic at case 1, representing the operation (input & 0xBAAAD0BF) * (input + 3), was oversimplified by ChatGPT 4.5 into merely input - 1, entirely omitting the required arithmetic and bitwise steps. Furthermore, at 0x15e9, the verified assembly correctly implements (input ^ 0xBAAAD0BF) * (input | 4), but ChatGPT 4.5's final interpretation introduced an extraneous constant (0xAA61A945) and a NOT operation, thus deviating significantly from the verified instructions. Lastly, for the computation (input + 0xBAAAD0BF) * (input & 5) explicitly performed at 0x1644, ChatGPT 4.5 simplified incorrectly to input - 2, entirely omitting essential arithmetic and bitwise operations explicitly evident in the assembly instructions.
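A deviation like the case-1 oversimplification is cheap to refute mechanically: evaluating the verified expression and the model's answer on a single input already separates them. A minimal sketch, with function names of our own choosing and the expressions taken from the analysis above:

```c
/* Verified case-1 logic quoted above, versus ChatGPT 4.5's answer from
   Listing 25. Function names are illustrative. */
unsigned int case1_verified(unsigned int input) {
    return (input & 0xBAAAD0BFu) * (input + 3u);
}

unsigned int case1_chatgpt45(unsigned int input) {
    return input - 1u;  /* Listing 25, case 1 */
}
```

For input = 1, the verified expression yields (1 & 0xBAAAD0BF) * 4 = 4, while the model's version yields 0, so a single counterexample suffices to reject the candidate without further analysis.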
Overall, ChatGPT 4.5 failed to accurately deobfuscate the assembly despite recognizing the switch structure, producing incorrect arithmetic and bitwise operations.

# 3.4.2 ChatGPT-pro-o1

In its initial attempt, ChatGPT o1-pro incorrectly represented the assembly logic as a looping state machine, as illustrated in Listing 26:

Listing 25: All obfuscation final deobfuscation attempt 4.5

Listing 26: All obfuscation initial o1-pro attempt

In this representation shown in Listing 26, o1-pro incorrectly inferred a persistent loop, misled by jump instructions (such as from 0x170d to 0x6b). The actual assembly, however, executes a single-pass switch statement based explicitly on input & 3, starting at address 0x156b. Additionally, reads from memory location [0] consistently yield zero, simplifying the analysis by resolving certain conditional checks at runtime. In subsequent iterations, o1-pro correctly identified the single-pass switch based on input & 3, yet incorrectly interpreted intermediate static constant assignments as final computational results, as illustrated in Listing 27:

Listing 27: All obfuscation intermediate result o1-pro

Listing 28: All obfuscation final deobfuscation attempt o1-pro

These static assignments in Listing 27 (e.g., mov dword ptr [ebp - 0x50], 0xd87b0953 at 0x1579) are components of obfuscation noise rather than definitive outputs. Critical computational logic, such as arithmetic operations involving constants like 0xBAAAD0BF, appears later in the assembly beyond address 0x1546, specifically in code blocks at addresses 0x1585, 0x15e9, 0x165b, and 0x16a7. ChatGPT o1-pro consistently neglected these dynamic computational sequences.
The final deobfuscation attempt by o1-pro produced the incorrect approximation shown in Listing 28:

```c
unsigned int function(unsigned int input) {
    unsigned int transformed = 0;
    switch (input & 3) {
    case 0:
        transformed = ((input ^ 0xBAAAD0BF) & 0xBAAAD0BF) | ((input & ~(input ^ 2)) ^ 2);
        break;
    case 1:
        transformed = ((~input ^ 0xBAAAD0BF) & (~input | (0 - 3))) + (~input & (0 - 3));
        break;
    case 2:
        transformed = (((input ^ 0xBAAAD0BF) & ~(input ^ 0xBAAAD0BF)) ^ ~(input ^ 4)) & 4;
        break;
    case 3:
        transformed = (0 - 0xBAAAD0BF - input) & (input & 5);
        break;
    }
    return transformed - 1;
}
```

Although this version in Listing 28 recognized the single-pass switch, the arithmetic and bitwise transformations differed significantly from the assembly's actual logic. At address 0x1585, the assembly explicitly computes (input & 0xBAAAD0BF) | (input ^ 0xBAAAD0BF) followed by a multiplication involving input & 2, yet o1-pro introduced unrelated bitwise combinations without the necessary arithmetic multiplication. Similar inaccuracies occurred at addresses 0x1569, 0x165b, and 0x16a7, where essential arithmetic multiplications and logical operations involving constants like 0xBAAAD0BF were incorrectly replaced by o1-pro with oversimplified bitwise logic. Overall, ChatGPT-pro-o1 failed to accurately deobfuscate the assembly despite identifying the switch structure, misrepresenting key arithmetic operations.
# 3.4.3 Grok 3

Initially, Grok 3 incorrectly modeled the assembly code as a looping state machine with constant states, exemplified by the pseudocode in Listing 29:

Listing 29: All obfuscation initial step Grok 3

In reality, the assembly implemented a single-pass switch structure at address 0x156b, determined solely by input & 3. The model's misunderstanding stemmed primarily from misleading jump instructions and constant-value comparisons (e.g., sub eax, 0x8f77bef6), as well as consistent zero-value memory reads from address [0]. Subsequently, Grok 3 further simplified its interpretation to a constant return function:

int simplified_function() { return 0xd87b0953; }

This version entirely neglected the dynamic computations dictated by input & 3, incorrectly assuming a static result. In a later attempt, Grok 3 correctly identified the underlying switch-based logic at address 0x156b but inaccurately represented the computational operations with generic placeholders, as demonstrated in Listing 30:

Listing 30: All obfuscation intermediate step Grok 3

This code in Listing 30 significantly deviated from the actual arithmetic operations present in the assembly, particularly those involving constants like 0xBAAAD0BF and explicit arithmetic multiplications (imul). In its final iteration, Grok 3 presented the implementation shown in Listing 31:

Listing 31: All obfuscation final step Grok 3

Although the final version in Listing 31 correctly reflected the conditional structure based on input & 3, it deviated substantially from the precise computations in the original assembly. For case 0, Grok 3 introduced a bitwise negation (~input), an incorrect bitwise OR operation, and an improper multiplier, instead of accurately representing the original assembly logic: (input | 0xBAAAD0BF) * (input ^ 2).
Similarly, in case 1, the actual computation (input & 0xBAAAD0BF) * (input + 3) was erroneously simplified to input - 1, omitting essential arithmetic operations. Case 2 further demonstrated inaccuracies, as Grok 3 incorrectly utilized a bitwise negation and introduced an unrelated constant 0xAA61A945, diverging from the intended computation (input ^ 0xBAAAD0BF) * (input | 4). Finally, case 3 was significantly oversimplified to input - 2, failing to include the necessary arithmetic addition and bitwise AND: (input + 0xBAAAD0BF) * (input & 5). Overall, Grok 3 failed to accurately deobfuscate the assembly despite identifying the switch structure, misrepresenting key arithmetic and logical operations.

# 3.4.4 DeepSeek R1

DeepSeek R1's deobfuscation attempts on the provided assembly code revealed significant limitations in handling complex obfuscation techniques. Its initial misconception mirrored that of the other models analyzed: DeepSeek R1 incorrectly interpreted the assembly as implementing a state machine with continuous execution, as shown in Listing 32:

Listing 32: DeepSeek R1 initial interpretation

This interpretation in Listing 32 fundamentally misunderstood the code's structure, which actually implemented a single-pass switch statement determined by input & 3 at address 0x156b, not a continuous-execution state machine. Despite additional guidance, DeepSeek R1 failed to accurately identify the critical computation blocks that follow address 0x1546. When directed to focus on the switch structure based on edi & 3, the model produced the code shown in Listing 33. Perhaps most concerning was DeepSeek R1's introduction of fictional hexadecimal values nowhere present in the original assembly, such as 0xBAADF00D, 0xDEADBEEF, and 0xCAFEBABE, as seen in Listing 33.
These are recognizable "hexspeak" values (hexadecimal values that resemble English words) commonly used as placeholders in debugging or illustrative examples, but they were entirely fabricated by the model rather than derived from the assembly code. When challenged about the origin of these values, DeepSeek R1 acknowledged they were not present in the actual assembly but provided no meaningful correction to its analysis. DeepSeek R1 failed to deobfuscate the assembly, misinterpreting the structure and fabricating constants not present in the code.

# 3.4.5 Claude 3.7 Sonnet

Claude attempted to deobfuscate the provided assembly code across multiple iterations. The model's approach evolved through several distinct phases, each demonstrating specific limitations in handling complex obfuscated code. In its initial analysis, Claude characterized the assembly as implementing a state machine with the structure shown in Listing 34:

```c
int function(int input) {
    // Initial setup
    bool isEven = (input % 2 == 0);
    bool isLessThan10 = (input < 10);
    int lastTwoBits = input & 3;
    int result = 0;

    // Different computation paths based on input properties
    if (isEven && isLessThan10) {
        result = compute_value_1(input, lastTwoBits);
    } else if (!isEven && isLessThan10) {
        result = compute_value_2(input, lastTwoBits);
    } else if (isEven && !isLessThan10) {
        result = compute_value_3(input, lastTwoBits);
    } else {
        result = compute_value_4(input, lastTwoBits);
    }

    return result;
}
```

When prompted to elaborate on each computation path, Claude expanded its analysis but represented these operations as simple lookup tables rather than preserving the arithmetic and bitwise logic present in the assembly. This intermediary analysis revealed a pattern-matching approach focused on detecting branching conditions rather than precise computation. When specifically directed to use a switch structure based on input & 3, Claude produced its final deobfuscation attempt shown in Listing 35:

```c
int function(int input) {
    bool isLessThan10 = (input < 10);
    int lastTwoBits = input & 3;
    switch (lastTwoBits) {
    case 0:  // Even number, divisible by 4
        return isLessThan10 ? 4 : 0;
    case 1:  // Odd number, one more than multiple of 4
        return isLessThan10 ? 3 : 2;
    case 2:  // Even number, 2 more than multiple of 4
        return isLessThan10 ? 4 : 1;
    case 3:  // Odd number, 3 more than multiple of 4
        return isLessThan10 ? 5 : 3;
    }

    return -1;  // This should never happen
}
```

This implementation in Listing 35 fails to capture critical aspects of the original assembly: the constant 0xBAAAD0BF that appears at addresses 0x598, 0xa18, and elsewhere is entirely absent from Claude's representation, despite its central role in the bitwise operations. The original code at address 0x1585 contains operations like (input & 0xBAAAD0BF) | (input ^ 0xBAAAD0BF), which are completely absent from Claude's deobfuscation. The assembly clearly shows multiplication operations (e.g., at 0x1392-0x13b1) that form essential components of the computation, yet these are not represented in Claude's output.
Claude represented the function as returning fixed integer values (0-5) based solely on the input’s properties, rather than capturing the dynamic computations performed by the assembly code. Additionally, Claude’s representation eliminated all intermediate calculations and variable manipulations present in the original assembly, significantly altering the function’s actual behavior. Claude 3.7 Sonnet failed to accurately deobfuscate the assembly, missing key arithmetic and bitwise operations despite identifying the switch structure. # 4 Discussion Our systematic evaluation of several state-of-the-art LLMs across multiple obfuscation techniques reveals significant performance variations, from autonomous deobfuscation to complete failure, depending on the technique employed. Rather than attributing these differences to singular factors like model size or architecture, we propose a theoretical framework based on four distinct yet interconnected dimensions: Reasoning Depth (analytical processing of program logic), Pattern Recognition (structural identification amid obfuscation), Noise Filtering (distinguishing essential logic from deceptive constructs), and Context Integration (maintaining coherence across fragmented code). This framework provides substantial explanatory power for our results and establishes predictive value for future advances. The following subsections explore each dimension through established program analysis theory, analyze cybersecurity implications, address study limitations, and identify promising research directions for enhancing AI-assisted reverse engineering. # 4.1 Theoretical Framework for LLM Deobfuscation The application of large language models to assembly-level deobfuscation represents a novel intersection of natural language processing and program analysis, requiring a structured approach to understand their capabilities and limitations. 
Our four-dimensional framework, derived from pattern analysis across multiple obfuscation techniques and models, provides a systematic foundation for characterizing how LLMs process obfuscated code and why specific techniques present greater analytical challenges than others. Context Integration Dimension Context Integration refers to a model’s ability to maintain coherence across logically related but physically disconnected code segments. This dimension parallels control flow graph analysis [54] and leverages the transformer architecture’s self-attention mechanisms [4], which capture long-range dependencies across token sequences. Our experiments revealed varied capabilities: GPT-4.5, GPT-Pro-o1, and Grok 3 demonstrated superior integration, successfully reconstructing relationships between computations at address 0x1585 with condition input & 3 == 0 (Section 3.3), despite fragmentation through state machines initialized with constant 0x64fd8a96. DeepSeek R1 and GPT-3o Mini produced state machine representations with transitions like state = 0x8403945a8 to state = 0xd87b0953 (Section 3.4.4 and Section 3.1.4) without simplifying to underlying algorithms. When facing combined obfuscation, all models struggled, with even GPT-4.5 misinterpreting intermediate state values at 0x1579 as definitive outputs (Section 3.4.1). We identified a clear relationship between context window size and performance: models with windows exceeding 200,000 tokens demonstrated superior performance on complex tasks.
Reasoning Depth Dimension Reasoning Depth reflects a model’s capacity to perform logical inference on program properties, similar to formal verification and abstract interpretation [46], but through data-driven approaches rather than explicit logical engines. This capability is based on chain-of-thought reasoning [39], allowing models to decompose complex problems into manageable steps. Our analysis revealed a clear performance spectrum: Claude 3.7 Sonnet and GPT-4.5 demonstrated advanced reasoning, correctly determining that expressions like ((var1 - 1) * var1) & 1 == 0 always evaluate to true (Section 3.1), effectively recognizing mathematical invariants without guidance. GPT-Pro-o1 and Grok 3 showed moderate capabilities (Section 3.1.3 and Section 3.1.5), requiring occasional hints, while GPT-4o, DeepSeek R1, and GPT-3o Mini exhibited limited reasoning (Section 3.1.1, Section 3.1.6, and Section 3.1.4), often treating invariant conditions as variable and missing unreachable code paths. This gradient underscores reasoning depth as a critical capability bridging general language processing and domain-specific program analysis. Pattern Recognition Dimension Pattern Recognition involves identifying structural and computational patterns within obfuscated code, building upon program similarity metrics and clone detection theory [50, 51]. This dimension connects to the "naturalness of software" hypothesis [52], which proposed that programming languages contain statistical regularities enabling probabilistic modeling. Our experiments showed that models like GPT-4.5, GPT-Pro-o1, and Grok 3 successfully reconstructed the underlying switch structures despite extensive obfuscation (Section 3.3), identifying complex arithmetic transformations such as (input | 0xbaaad0bf) * (input ^ 2) within heavily obfuscated contexts (Section 3.2).
This capability emerged organically from general language modeling rather than specialized program analysis tools. Performance varied significantly between obfuscation techniques: models that excelled at reconstructing control structures often struggled with arithmetic transformations, supporting the findings that statistical models of code can capture both syntactic and semantic patterns [53], with varying degrees of success. Noise Filtering Dimension Noise Filtering encompasses a model’s ability to distinguish essential computational logic from obfuscation artifacts, sharing functional similarities with program slicing [48] despite fundamentally different mechanisms. From an information-theoretic perspective [47], LLMs must extract meaningful signal from the increased entropy that obfuscation introduces. Our experiments demonstrated this capability in specific contexts: when analyzing bogus control flow, models like GPT-4.5, GPT-Pro-o1, and Claude 3.7 Sonnet correctly identified repetitive predicate checks as non-functional (Section 3.1.2, Section 3.1.3, Section 3.1), simplifying the code to its essential form. However, we observed significant limitations: DeepSeek R1 generated convoluted expressions like ((0xBAAAD0BF | n) * (0xE6C98769 & ~n) ^ 2) instead of the simpler (n | 0xBAAAD0BF) * (n ^ 2) (Section 3.2.4). This parallels challenges in adversarial machine learning [49], where obfuscation disrupts the statistical patterns on which LLMs rely, creating noise that confounds even models with strong capabilities in other dimensions.

# 4.2 Mapping Framework Dimensions to Empirical Results

Our four-dimensional framework provides explanatory power for the performance variations observed in Table 2.
Each obfuscation technique challenges specific dimensional capabilities, creating predictable performance patterns across models. Bogus Control Flow and Reasoning Depth Performance against bogus control flow (BCF) appears to be associated with reasoning capabilities. Models with lower intervention requirements (Level 0-2) demonstrated superior reasoning depth by correctly analyzing opaque predicates like (var1 * (var1 - 1)) & 1 == 0 (Section 3.1). In contrast, models requiring expert intervention (Levels 4-5) consistently failed to recognize mathematical invariants, treating always-true conditions as variable. This dimension clearly separates models into performance tiers: high performers (Claude 3.7 Sonnet, GPT-4.5, Grok 3), moderate performers (GPT-Pro-o1), and low performers (GPT-4o, GPT-3o Mini, DeepSeek R1). Instruction Substitution and Pattern Recognition Results for instruction substitution (IS) reveal widespread deficiencies in pattern recognition (Section 3.2). The consistently high intervention requirements demonstrate how obfuscated arithmetic operations challenge even advanced models’ pattern recognition capabilities. We identified various approaches: some models attempted to preserve obfuscation constants (Section 3.2.2, Section 3.2.4) while others sought mathematical simplification (Section 3.2.5). Neither approach consistently succeeded, with most models requiring significant intervention (Levels 2-5), highlighting pattern recognition as a critical bottleneck in current deobfuscation capabilities. Control Flow Flattening and Context Integration Performance against control flow flattening (CFF) appears to reflect context integration capabilities.
Several models (GPT-4.5, GPT-Pro-o1, Grok 3) achieved level 0 (Section 3.3), successfully maintaining coherence across fragmented code blocks and reconstructing relationships between computations at address 0x1585 and control conditions at 0x156b (Section 3.3.2). Most models performed relatively well on this task (Levels 0-2), suggesting that context integration is a comparative strength in current LLM architectures, potentially linked to attention mechanisms optimized for capturing long-range dependencies. Combined Techniques and Dimensional Interdependence The universal failure against combined techniques (All: Level 5 across all models tested in Section 3.4) demonstrates how obfuscation specifically targets the interdependence of these four dimensions. When multiple techniques simultaneously challenge reasoning depth, pattern recognition, noise filtering, and context integration, even models that perform well in individual dimensions falter. This suggests that dimensional capabilities operate synergistically rather than independently: a model strong in three dimensions but weak in one may still completely fail when all dimensions are challenged simultaneously. Dimensional Performance Anomalies Our analysis revealed important dimensional performance patterns that open new valuable research directions. For instance, Claude 3.7 Sonnet demonstrated exceptional Reasoning Depth with bogus control flow (Level 0) (Section 3.1) and showed similar Context Integration with control flow flattening (Level 0-1) (Section 3.3.4), despite both techniques requiring sophisticated reasoning capabilities. Similarly, models with large context windows occasionally performed worse than smaller-context models on tasks that seemingly should benefit from extended context.
These patterns demonstrate significant interactions between dimensional capabilities and specific implementation details of obfuscation techniques that our current framework does not fully capture. Future research should specifically investigate these boundary cases to refine our understanding of how dimensional capabilities interact and potentially identify additional factors that influence deobfuscation performance. Dimensional Prioritization by Obfuscation Type Our results suggest that each obfuscation technique challenges a different primary dimension:

• Bogus Control Flow primarily challenges Reasoning Depth
• Instruction Substitution primarily challenges Pattern Recognition
• Control Flow Flattening primarily challenges Context Integration
• Combined Techniques challenge all dimensions simultaneously

This dimensional mapping explains the inconsistent performance patterns observed across techniques: models excel where their dimensional strengths align with technique demands, while struggling where techniques target their dimensional weaknesses. Future architectural improvements should focus on balanced enhancement across all four dimensions rather than optimizing for any single capability.

# 4.2.1 Taxonomy of LLM Deobfuscation Errors

Analysis of deobfuscation attempts across our evaluation reveals distinct error patterns that occurred consistently across multiple models. The following taxonomy classifies these errors, with cross-references to examples already detailed in our Results section: Predicate Misinterpretation Errors These errors involve failing to recognize that certain conditions are always true or false (invariant), relating directly to limitations in the Reasoning Depth dimension of our framework.
When analyzing bogus control flow, GPT-4o misinterpreted the opaque predicate (var1 * (var1 - 1)) & 1 == 0 as potentially false (Section 3.1.1), despite the mathematical property that the product of consecutive integers is always even. This misinterpretation led to incorrect control flow analysis, with the model treating the unreachable jump to address 0x40169e as a viable execution path. As with the challenges faced in formal verification [46], these errors demonstrate the difficulty of reasoning symbolically about invariant properties. This error was prevalent in the analysis of bogus control flow obfuscation, appearing in 3 of 8 models (GPT-4o, GPT-3o Mini, DeepSeek R1 in Section 3.1). In contrast, models with enhanced reasoning capabilities, such as Claude 3.7 Sonnet, GPT-4.5, and Grok 3 (in Section 3.1), correctly identified the predicate’s invariant property. Beyond its frequency, this error has practical consequences, such as misidentification of potentially exploitable code paths in vulnerability analysis. Structural Mapping Errors These errors occur when a model correctly identifies computational components but incorrectly maps them to control structures, reflecting limitations in both the Pattern Recognition and Context Integration dimensions of our framework. When analyzing CFF, Grok 2 demonstrated this error type, correctly identifying four arithmetic transformations but assigning three to incorrect conditional paths. Specifically, it matched (input + 0xbaaad0bf) * (input & 5) to input & 3 == 0 instead of input & 3 == 3 (Section 3.3.1). This creates deobfuscated code that preserves computational elements but misimplements their control dependencies, meaning that the conditions governing their execution are wrong, resulting in runtime behavior fundamentally different from the original despite structural similarity.
This type of error presents a particularly challenging issue for code analysis applications, where incorrect mappings may pass superficial validation, yet introduce subtle semantic errors that fundamentally alter program behavior [53]. This error type was observed throughout the deobfuscation process, with varying outcomes depending on model capabilities. While some models like Grok 2 initially exhibited structural mapping errors but recovered with guidance (Section 3.3.1), models with stronger context integration capabilities (e.g., GPT-4.5 and Claude 3.7 Sonnet) correctly reconstructed these relationships from the outset, achieving autonomous deobfuscation (Level 0-1). This demonstrates that structural mapping represents a recoverable error for capable models rather than a fundamental limitation. Control Flow Structure Misinterpretation Errors This error type, primarily reflecting limitations in our framework’s Noise Filtering and Pattern Recognition dimensions, occurs when models incorrectly reconstruct the fundamental control flow structure of the original code. A common manifestation we observed was the incorrect inference of iterative structures (loops, state machines) where only single-pass logic exists. When analyzing bogus control flow obfuscation, DeepSeek R1 and GPT-3o Mini misinterpreted repetitive predicate checks (deliberately inserted as obfuscation artifacts) as indicators of functional looping behavior. For example, DeepSeek R1 generated for n in range(10) constructs in its deobfuscated output, despite the original assembly implementing a non-iterative execution path (Section 3.1.6). Similarly, when facing combined obfuscation techniques, GPT-Pro-o1 and Grok 3 reconstructed the code as a looping state machine (e.g., while (1) { switch (state) ... }; Section 3.4.2 and Section 3.4.3) rather than identifying the single-pass switch statement at address 0x156b.
While this error resembles hallucination phenomena in other LLM applications [55], our evidence demonstrates it primarily arises from misclassifying obfuscation patterns. These patterns, such as control flow flattening, transform direct branches into state-machine-like constructs to mislead static analysis tools. This misinterpretation carries significant security implications, as it can fundamentally alter an analyst’s understanding of program execution, potentially obscuring vulnerabilities or introducing nonexistent execution paths. These consequences directly impact vulnerability assessment procedures, where accurate control flow reconstruction is essential to identify potentially exploitable paths [56, 57], particularly in contexts where automated tools supplement human analysis. We observed this error in 3 out of 8 models (DeepSeek R1, GPT-3o Mini, GPT-Pro-o1), highlighting a widespread difficulty in separating obfuscation from actual control flow. Arithmetic Transformation Errors These errors involve failing to correctly reconstruct precise arithmetic and bitwise operations from obfuscated assembly, directly reflecting limitations in the Pattern Recognition dimension of our framework. All five models that completed instruction substitution evaluation exhibited this error type, though with varying severity. Our analysis documented systematic pattern failures: GPT-4o (Section 3.2.1) replaced the operation (input | 0xBAAAD0BF) * (input ^ 2) with entirely incorrect expressions containing unrelated constants and inappropriate operations. GPT-4.5 demonstrated a different failure mode (Section 3.2.2), preserving core operations but introducing unnecessary complexity by adding extraneous bitwise operations and constants to what should be simple expressions. For example, instead of the straightforward operation present in the original code, it generated ((input ^ 0xBAAAD0BF ^ 0xE6C98769) + 2) * (input | 2).
DeepSeek R1 (Section 3.2.4) produced particularly convoluted transformations with multiple nested operations and unrelated constants. Models adopted different strategies when handling obfuscated arithmetic operations. Claude 3.7 Sonnet (Section 3.2.5) attempted mathematical abstraction, deriving patterns like (-3) * (x ^ 2) for even-indexed cases while ignoring specific constants. GPT-4.5 preserved core operations but introduced unnecessary complexity. Grok 3 (Section 3.2.3) achieved partial accuracy, correctly reconstructing some operations while misrepresenting others. DeepSeek R1 produced the most convoluted transformations with multiple nested operations and fabricated constants. These diverse approaches, each with distinct trade-offs between abstraction and fidelity, demonstrate the fundamental challenge LLMs face when attempting to simplify deliberately obfuscated arithmetic. The consistency of these errors across all models when faced with instruction substitution, a technique specifically designed to obscure individual operations, suggests a fundamental limitation in the ability of current LLMs to recognize equivalent computational expressions, particularly when obfuscation deliberately transforms simple operations into more complex forms through mathematical identities (all examples from Section 3.2). Constant Propagation Errors These errors involve incorrectly handling, identifying, or fabricating literal values in deobfuscated code, revealing limitations in both Pattern Recognition and Context Integration dimensions. We identified three distinct manifestations with varying severity. First, subtle transcription errors occurred when models processed large hexadecimal constants, exemplified by Claude 3.7 Sonnet’s (Section 3.2.5) misreading of 0xBAAAD0BF as 0xBAAAAD0BF (inserting an extra ’A’) during instruction substitution analysis.
Second, models introduced mathematically unrelated constants, as seen when GPT-4.5 (Section 3.2.2) generated expressions containing values like 0xE6C98769 and 0x8ABD1CD5, constants not present in the original assembly but erroneously incorporated into arithmetic operations. Most concerning was the third category: complete fabrication of well-known "hexspeak" values. When analyzing combined obfuscation techniques, DeepSeek R1 generated deobfuscated code containing entirely fictional constants (Section 3.4.4) like 0xBAADF00D, 0xDEADBEEF, and 0xCAFEBABE: recognizable placeholder values commonly used by developers but completely absent from the original assembly. This fabrication suggests a problematic pattern-matching behavior where models revert to familiar training examples rather than faithfully representing the analyzed code. From a security perspective, these errors are particularly troubling when analyzing cryptographic implementations or any code where precise constant values directly impact functionality and correctness, such as protocol implementations, hash functions, or file format parsers. We identified constant propagation errors in 3 of 8 models (Claude 3.7 Sonnet, DeepSeek R1, and GPT-4.5 in Section 3.2), indicating that even advanced LLMs struggle with accurate representation of symbolic information.

# 4.2.2 Obfuscation Technique Resistance Model

Based on our empirical findings, we propose a three-tier resistance model that categorizes obfuscation techniques according to their effectiveness against LLM-based deobfuscation: Low Resistance Techniques Control flow flattening primarily challenges the Context Integration dimension by fragmenting logically connected code segments across a state machine structure. Our experiments revealed excellent performance across most models: several top performers (GPT-4.5, GPT-Pro-o1, Grok 3) achieved autonomous deobfuscation (Level 0), while most others required minimal guidance (Levels 0-2).
Models with larger context windows and advanced attention mechanisms performed particularly well (Section 3.3), suggesting that this obfuscation technique, while effective against traditional static analysis tools, offers limited protection against LLMs with strong context integration capabilities. Moderate Resistance Techniques Bogus control flow obfuscation primarily challenges the Reasoning Depth dimension through opaque predicates and unreachable code paths. This technique demonstrated intermediate resistance, with clear performance stratification across models: only one model achieved autonomous deobfuscation (Claude 3.7 Sonnet at Level 0), while two others required minimal guidance (Grok 3 at Level 1, GPT-4.5 at Levels 1-2). However, the remaining models needed substantial expert intervention or failed entirely (GPT-Pro-o1 at Level 3, GPT-4o at Level 4, GPT-3o Mini and DeepSeek R1 at Levels 4-5). This stratification directly correlates with each model’s ability to perform mathematical reasoning about invariant conditions (Section 3.1). The mixed performance demonstrates that while advanced models with enhanced reasoning capabilities can overcome this obfuscation, it still presents significant challenges for models with weaker reasoning depth, making it moderately effective against current LLM-based analysis. High Resistance Techniques Instruction substitution and combined obfuscation techniques demonstrated the highest resistance, with all models either requiring expert intervention (Level 5) or failing completely. Instruction substitution specifically challenges Pattern Recognition by replacing simple operations with complex mathematical equivalents. Combined techniques simultaneously attack multiple capability dimensions, creating a compounding effect that overwhelmed even the most capable models. Every model exhibited multiple error types from our taxonomy when facing combined obfuscation (Section 3.4).
The universal failure against these techniques demonstrates a current upper bound on LLM deobfuscation capabilities, regardless of model size or architecture.

# 4.3 Limitations

Our systematic approach incorporates deliberate scoping decisions that define clear boundaries for this research: Architectural Scope Our evaluation focused exclusively on x86_64 assembly, which may limit the generalizability of our dimensional framework to other instruction set architectures. While x86_64 represents a dominant platform for desktop and server environments, embedded systems and mobile devices increasingly employ ARM, RISC-V, and other architectures that may present different deobfuscation challenges. Sample Diversity Our evaluation strategically focused on a well-documented program obfuscated with OLLVM, limiting generalizability. While this controlled approach enabled detailed analysis across models, real-world malware employs diverse obfuscation tools and techniques. Future work should expand testing across multiple codebases and obfuscation frameworks to validate whether our dimensional framework applies consistently across varied obfuscation implementations. Model Selection Our study focused on eight commercial LLMs representing current state-of-the-art capabilities but cannot capture the full landscape. Notably absent are domain-specific code models and open-source alternatives that might exhibit different capabilities. This limitation reflects the practical constraints of comprehensive evaluation rather than intentional exclusion. Future research should evaluate specialized code-oriented models that may demonstrate different dimensional strengths. Qualitative Assessment Our attacker knowledge level framework, while providing structured evaluation, inherently involves subjective judgment. To mitigate this limitation, we conducted multiple deobfuscation attempts per scenario and documented detailed reasoning for each assigned level.
However, the framework’s reproducibility remains a challenge that future work should address through developing more objective, quantitative metrics for deobfuscation success. Temporal Boundaries This work establishes a methodological foundation that will remain relevant despite the rapid evolution of LLM capabilities, with significant advances occurring on timescales of months. Our evaluation captures capabilities at a specific point (March 2025), establishing a baseline against which future progress can be measured. By focusing on dimensional capabilities rather than specific implementations, our framework provides enduring value for evaluating emerging models and techniques. # 4.4 Future Research Directions Our dimensional framework and empirical findings reveal several high-impact research opportunities that could advance both LLM-based deobfuscation capabilities and security defenses: Dimensional Transfer Learning Our results demonstrate uneven development across capability dimensions, with models exhibiting strength in specific dimensions (e.g., Context Integration) while struggling in others (e.g., Pattern Recognition). Future research should investigate whether techniques that enhance performance in one dimension could transfer to others. For example, research could explore whether pre-training approaches that improve reasoning capabilities for mathematical invariants (addressing Predicate Misinterpretation Errors) might also enhance pattern recognition for arithmetic transformations. Such dimensional transfer could provide a more efficient pathway to developing balanced capabilities across all four dimensions than treating each as an isolated challenge. Adversarial Obfuscation Framework The error taxonomy we’ve documented provides a foundation for developing next-generation obfuscation techniques specifically designed to resist LLM-based analysis. 
Future research should systematically explore techniques that exploit identified weaknesses across multiple dimensions simultaneously. Particularly promising are techniques that:

• Embed mathematically complex opaque predicates that appear variable but are invariant (targeting Reasoning Depth)
• Employ instruction substitution with mathematical identities that preserve human readability while maximizing LLM confusion (targeting Pattern Recognition)
• Introduce structurally ambiguous constructs that suggest iterative execution but implement single-pass logic (targeting Noise Filtering)
• Fragment related code blocks across non-obvious execution paths (targeting Context Integration)

These techniques could be implemented as extensions to existing obfuscation frameworks like OLLVM and systematically evaluated against both current and future LLM architectures. Uncertainty-Aware Deobfuscation Systems Our finding that all models struggled with precise arithmetic reconstruction suggests the need for systems that explicitly quantify uncertainty in their deobfuscation outputs. Future work should develop approaches where models provide confidence scores for different aspects of deobfuscated code, highlighting areas that may require human verification. This would transform deobfuscation from a binary success/failure paradigm to a collaborative human-AI workflow where resources are efficiently allocated to the most uncertain aspects of analysis. Such systems could potentially incorporate multiple specialized models, each focused on different dimensional capabilities, with a meta-model integrating their outputs while tracking uncertainty across the deobfuscation process. Cross-Architecture Generalization Our evaluation focused on x86_64 assembly, but real-world reverse engineering requires analyzing diverse architectures (ARM/AArch64, x86-32 [58], and emerging ISAs such as RISC-V [59]).
Future research should investigate whether the dimensional capabilities we have identified generalize across instruction set architectures, particularly exploring whether models trained primarily on one architecture can transfer capabilities to others. This research direction is particularly important given the increasing diversity of embedded systems and IoT devices that may employ obfuscation techniques. Methodologically, this would require developing standardized datasets of equivalent programs compiled and obfuscated across multiple architectures, enabling a direct comparison of deobfuscation performance. Quantitative Benchmark Development To enable rigorous tracking of progress in this field, future work should develop standardized quantitative benchmarks that objectively measure each dimension of our framework. Such benchmarks could include:

• Reasoning depth metrics based on correct evaluation of increasingly complex opaque predicates
• Pattern recognition metrics measuring accuracy in reconstructing obfuscated arithmetic operations
• Noise filtering metrics quantifying ability to distinguish functional from non-functional code
• Context integration metrics assessing accuracy in reconstructing control flow relationships

These benchmarks should include both synthetic examples designed to isolate specific capabilities and real-world obfuscated code samples to ensure practical relevance. By establishing such metrics, the field can move beyond qualitative assessments to measurable progress in LLM-based deobfuscation capabilities.
Large language models (LLMs) have shown promise in software engineering, yet their effectiveness for binary analysis remains unexplored. We present the first comprehensive evaluation of commercial LLMs for assembly code deobfuscation. Testing seven state-of-the-art models against four obfuscation scenarios (bogus control flow, instruction substitution, control flow flattening, and their combination), we found striking performance variations, from autonomous deobfuscation to complete failure. We propose a theoretical framework based on four dimensions: Reasoning Depth, Pattern Recognition, Noise Filtering, and Context Integration, explaining these variations. Our analysis identifies five error patterns: predicate misinterpretation, structural mapping errors, control flow misinterpretation, arithmetic transformation errors, and constant propagation errors, revealing fundamental limitations in LLM code processing. We establish a three-tier resistance model: control flow flattening (low resistance), bogus control flow (moderate resistance), and instruction substitution/combined techniques (high resistance). Universal failure against combined techniques demonstrates that sophisticated obfuscation remains effective against advanced LLMs. Our findings suggest a human-AI collaboration paradigm where LLMs reduce expertise barriers for certain reverse engineering tasks while requiring human guidance for complex deobfuscation. This work provides a foundation for evaluating emerging capabilities and developing resistant obfuscation techniques.
[ "cs.SE", "cs.AI", "cs.CR" ]
# 1 INTRODUCTION The emergence of reasoning capabilities in Large Language Models (LLMs) has marked a major leap forward, particularly in tasks involving mathematics and programming (Guo et al., 2025; Jaech et al., 2024; Zeng et al., 2024; Yang et al., 2025; Kavukcuoglu, Koray, 2025). To enable such reasoning, LLMs are trained using the Reinforcement Learning with Verifiable Rewards (RLVR) technique, guided by verifiable rewards computed from the model’s own final outcomes (Schulman et al., 2017; Guo et al., 2025; Liu et al., 2025a; Yu et al., 2025). These rewards are derived from objective signals such as matching reference answers in math problems, passing unit tests in coding challenges, or selecting the correct option in multiple-choice questions (MCQ). Compared to traditional approaches like reward model training, verifiable rewards have proven effective in mitigating reward hacking and are relatively straightforward to implement (Guo et al., 2025; Shao et al., 2024; Yue et al., 2025; Liu et al., 2025a). Building on the success of reasoning capabilities in math, programming, and MCQ tasks with RLVR, there is growing interest in extending these techniques to open-ended tasks that require logical analysis, for example, revising a document in response to comments, composing analytical summaries or reports, or reviewing financial documents. The primary challenge lies in designing a generic, verifiable reward signal akin to those used in math and coding tasks (Zuo et al., 2025; Zhao et al., 2025b; Su et al., 2025; Chen et al., 2025c).
Figure 1: An illustrative ParaRev example: token-level certainty over the reference outcome conditioned on a better CoT trace (Reasoning A) and a worse one (Reasoning B). Aggregated certainty assigns misleading advantages, while R3 emphasizes the few reasoning-reflective tokens whose likelihood actually varies with reasoning quality.
Given the limitations and potential inefficacy of training a separate reward model (Guo et al., 2025; Shao et al., 2024; Zuo et al., 2025; Zhao et al., 2025b), LLM-as-a-judge (Zheng et al., 2023; Lee et al., 2023) may seem to be an alternative. However, relying on an external LLM to evaluate the outcomes of an actor LLM in RLVR introduces sensitivities to factors such as prompt design and optimization, model selection, the generator–discriminator gap, and reward hacking (Chen et al., 2025b; Huang et al., 2024; Zuo et al., 2025; Xu et al., 2025b; Sharma et al., 2024). Evaluating the training model’s chain-of-thought (CoT) reasoning in semantic space adds an even greater challenge, given how it hides reasoning in the latent space (Chen et al., 2025d). Meanwhile, traditional similarity-based metrics such as ROUGE scores or cosine similarity often fail to capture key logical aspects of open-ended outcomes and remain vulnerable to reward hacking (Christiano et al., 2017; Stiennon et al., 2020; Su et al., 2025). To address these challenges, we first introduce a new token-level dense reward called the Reasoning Reflection Reward (R3). 
Owing to the autoregressive nature of LLMs, the CoT reasoning serves as a latent prefix that conditions the model’s generation of the final outcome. Consequently, the LLM’s token-level certainty of the reference outcome – measured under this reasoning prefix – effectively captures how likely the generated reasoning is to produce the correct outcome. However, in long-form generation, only a limited subset of tokens in the reference intrinsically reflect variations in reasoning paths, while many others are less informative and may dilute the reward signal. To overcome this, R3 selectively identifies and emphasizes the key tokens in the reference that are most sensitive to variations in reasoning, shaping the reward signal to focus on these reasoning-reflective tokens (Fig. 1). This approach enables the model to directly optimize its reasoning paths toward achieving the reference outcomes in open-ended tasks, promoting outcome-driven reasoning in a manner analogous to RLVR. We then propose Direct Reasoning Optimization (DRO), an RL-based fine-tuning framework that leverages R3 as its core reward signal. To compute R3, DRO directly uses a dynamic reward policy derived from the same LLM being optimized – thereby eliminating the need for any external reward model or signal. Our method builds upon the widely adopted RLVR framework, Group Relative Policy Optimization (GRPO) (Guo et al., 2025; Shao et al., 2024), extending its outcome-driven effectiveness to open-ended reasoning tasks. DRO further integrates a generic data filtering technique for open-ended reasoning tasks, motivated by the growing recognition of data selection’s importance in recent work (Muennighoff et al., 2025; Jiang et al., 2025; Ye et al., 2025; Yang et al., 2025). Our approach leverages R3 to dynamically filter training samples during RL training, without requiring any task-specific filtering heuristics or external frameworks. 
This filtering strategy improves downstream performance while simultaneously reducing training cost and time. Figure 2: Overview of Direct Reasoning Optimization (DRO), a framework that rewards and refines reasoning by directly leveraging feedback from the training model. DRO operates within the GRPO framework, where a group of CoT reasoning traces sampled from the actor policy $(\pi_{\theta})$ are scored primarily using the R3 score along with a length penalty on the final outcome. The reward is computed via an internal policy $(\pi_{\mathrm{rwd}})$, derived from the same base reference policy $(\pi_{\mathrm{ref}})$ being optimized. DRO employs R3-based dynamic training data filtering for open-ended reasoning tasks to improve data efficiency and downstream task performance. Finally, we evaluate DRO on two distinct datasets—ParaRev (Jourdan et al., 2025) and FinQA (Chen et al., 2021)—using two Qwen reasoning models distilled from DeepSeek-R1. To the best of our knowledge, this is the first work to evaluate reasoning optimization on an open-ended task like paragraph revision (ParaRev), which involves relatively long-form textual outputs beyond the traditional math and programming domains. On ParaRev, DRO outperforms all baseline methods in terms of downstream task performance while achieving around $45\%$ reduction in training cost. We further validate DRO on FinQA, a task with classic math-style answers, demonstrating that it achieves comparable performance to standard binary verifiable reward approaches—highlighting its versatility across both structured and open-ended tasks. # 2 RELATED WORK # 2.1 LLM REASONING Chain-of-Thought (CoT) reasoning has emerged as a critical driver of advanced reasoning in LLMs, improving accuracy across mathematical, commonsense, and logical tasks while increasing transparency in the decision-making process. 
Initial prompting-based methods demonstrated that LLMs could be guided to reason step-by-step without additional training, resulting in significant performance gains (Kojima et al., 2022; Huang & Chang, 2022; Zhang et al., 2022; Zelikman et al., 2022; Wei et al., 2022). Building on this foundation, recent approaches have incorporated CoT reasoning into the training loop—either through supervised fine-tuning on annotated reasoning traces (Zelikman et al., 2022) or via reinforcement learning with process- or outcome-based rewards (Shao et al., 2024; Lambert et al., 2024)—to further strengthen reasoning capabilities. By decomposing problems into intermediate steps, LLMs not only improve in accuracy but also become more interpretable and trustworthy, both of which are essential for real-world deployment (Lightman et al., 2023). # 2.2 REINFORCEMENT LEARNING WITH VERIFIABLE REWARDS Reinforcement Learning from Verifiable Rewards (RLVR) has emerged as a powerful framework for improving LLM performance in domains where success can be unambiguously defined and automatically evaluated (Lambert et al., 2024; Liu et al., 2025a; Su et al., 2025). In areas such as coding and mathematics, RLVR has enabled substantial advancements—models now solve complex problems and generate correct code with unprecedented accuracy and consistency (Shao et al., 2024; Yu et al., 2025; Muennighoff et al., 2025; Ye et al., 2025; Hu et al., 2025; Luo et al.; Liu & Zhang, 2025). This success stems from the integration of reinforcement learning with deterministic outcome verification, eliminating the need for learned reward models and facilitating large-scale training on diverse problem sets. However, extending RLVR to open-ended reasoning tasks remains a significant challenge. These tasks often involve diverse reasoning paths and multiple valid outcomes, making it difficult to define rule-based or verifiable rewards. 
As a result, designing reward signals that reliably reflect reasoning quality in such settings is still an open problem. # 2.3 REINFORCEMENT LEARNING WITHOUT EXTERNAL VERIFIER Over the past year, considerable efforts have been made to extend the success of RLVR to open-ended reasoning tasks. One line of work focuses on training general-purpose reward models to supervise reasoning optimization (Chen et al., 2025c; Liu et al., 2025b; Su et al., 2025), which introduces the overhead of developing and maintaining an additional reward model during RL training. A complementary line of research explores the use of internal model feedback – such as self-certainty – as a reward signal, thereby eliminating the need for external verifiers (Zhao et al., 2025b;a; Xu et al., 2025a; Zuo et al., 2025; Zhou et al., 2025; Chen et al., 2024; Tang et al., 2025). Among these, several concurrent studies (Zhao et al., 2025a; Xu et al., 2025a; Zhao et al., 2025b; Zuo et al., 2025) rely exclusively on intrinsic feedback to optimize reasoning traces without reference answers, while other concurrent studies (Tang et al., 2025; Zhou et al., 2025) incorporate reference outcomes to estimate the quality of generated reasoning. However, none of these approaches examine the token-level sensitivity of reasoning-quality rewards in the context of open-ended, long-form generation, as we introduce in Section 4.1. Additionally, prior work does not address data filtering for reasoning training using task-independent, model-internal rewards – an approach we propose in Section 4.2.2 to improve data efficiency. Finally, to the best of our knowledge, we are the first to evaluate RL-based reasoning optimization on a long-form open-ended task such as paragraph revision in ParaRev (Section 5.1). # 3 BACKGROUND: REASONING OPTIMIZATION WITH RL Recent advances in LLM reasoning have largely been driven by reinforcement learning (RL)-based optimization techniques. 
To ground this process theoretically, we begin by framing RL-based reasoning optimization within the Markov Decision Process (MDP) framework. For LLMs, the MDP can be naturally defined at the token level, as the model generates one token at each time step $t$. In this setup, the state $s_t$ at time $t$ consists of the input prompt or question $\mathbf{q}$ followed by the sequence of output tokens generated so far $(\mathbf{o}_{<t})$, i.e., $s_t = \mathbf{q}; \mathbf{o}_{<t}$. The LLM, acting as the policy $\pi_{\theta}$, takes a stochastic action by picking the next token $(o_t)$ from its vocabulary based on the current state $s_t$. The state then transitions to $s_{t+1} = s_t; [o_t]$. With RL-based reasoning optimization, the goal is to learn an optimal policy $\pi^{*}$ that generates a sequence of tokens conditioned on the question $\mathbf{q}$ in such a way that it leads to a desired final outcome, such as the correct answer to a math question. In order to optimize the policy $\pi_{\theta}$, Shao et al. (2024) proposed Group Relative Policy Optimization (GRPO), a variant of Proximal Policy Optimization (PPO) (Schulman et al., 2017). 
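The token-level MDP view above can be sketched in a few lines. The token ids below are made-up placeholders (not from any real tokenizer), purely to illustrate the state transition:

```python
# Token-level MDP of autoregressive generation: the state is the prompt plus
# all tokens emitted so far, and each action appends one token.

def transition(state, action_token):
    """MDP transition: s_{t+1} = s_t ; [o_t]."""
    return state + [action_token]

q = [11, 42]            # encoded prompt q (hypothetical token ids)
state = list(q)         # s_0 = q
for o_t in [7, 99]:     # tokens o_0, o_1 sampled from the policy pi_theta
    state = transition(state, o_t)

print(state)  # [11, 42, 7, 99]
```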
The surrogate objective in GRPO, which is maximized to learn the optimal policy, is defined as: $$ \mathcal{J}_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{\mathbf{q} \sim P(Q),\, \{\mathbf{o}_i\}_{i=1}^{G} \sim \pi_{\theta_{\mathrm{old}}}(\cdot \mid \mathbf{q})} \Bigg[ \frac{1}{G} \sum_{i=1}^{G} \frac{1}{|\mathbf{o}_i|} \sum_{t=1}^{|\mathbf{o}_i|} \min \Bigg[ \frac{\pi_{\theta}(o_{i,t} \mid \mathbf{q}, \mathbf{o}_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t} \mid \mathbf{q}, \mathbf{o}_{i,<t})} \hat{A}_{i,t},\ \mathrm{clip}\Bigg( \frac{\pi_{\theta}(o_{i,t} \mid \mathbf{q}, \mathbf{o}_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t} \mid \mathbf{q}, \mathbf{o}_{i,<t})},\, 1-\epsilon,\, 1+\epsilon \Bigg) \hat{A}_{i,t} \Bigg] - \beta\, \mathbb{D}_{\mathrm{KL}}\big( \pi_{\theta} \,\|\, \pi_{\mathrm{ref}} \big) \Bigg] \quad (1) $$ where $\epsilon$ is the clipping parameter to maintain stability and $\pi_{\theta_{\mathrm{old}}}$ denotes the policy before the most recent update. The key distinction in GRPO lies in the computation of the advantage estimate $\hat{A}_{i,t}$ for the $t^{th}$ token of the $i^{th}$ output, which introduces a structured comparison across a group of generations from the same question. 
Specifically, for a given prompt or question, suppose we sample a group of $G$ outputs $\{\mathbf{o}_i\}_{i=1}^{G}$ from the actor model with corresponding rewards $\{r_i\}_{i=1}^{G}$. Then, for each token in the $i^{th}$ output, the advantage is estimated as: $$ \hat{A}_{i,t} = \frac{r_i - \mathrm{mean}(\{r_i\}_{i=1}^{G})}{\mathrm{std}(\{r_i\}_{i=1}^{G})} \quad (2) $$ In the context of RLVR, $r_i$ is typically a verifiable reward computed on the final outcome – such as 1 if the final answer is correct and 0 otherwise. Note that each sampled output $(\mathbf{o}_i)$ consists of CoT reasoning followed by the final answer. This group-normalized formulation encourages the policy to assign higher probabilities to trajectories that outperform their peers, steering generation toward more promising reasoning paths. As a result, the model learns to sample tokens that are more likely to lead to correct or high-reward outcomes. Finally, GRPO includes a KL divergence regularization term, $\mathbb{D}_{\mathrm{KL}}$, to constrain the updated policy from deviating too much from the reference policy. This regularization is critical in preventing overfitting or reward exploitation – especially when a proxy reward model is used instead of reference answers. At the same time, a certain degree of exploration is necessary for the policy to meaningfully evolve beyond the reference policy. The hyperparameter $\beta$ controls this trade-off between exploration and exploitation. In the context of RLVR, where the reward is derived from matching reference answers (rather than a learned model), this risk is mitigated, and therefore recent state-of-the-art approaches often set $\beta = 0$ (Liu et al., 2025a; Guo et al., 2025). 
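As a concrete sketch of the group-relative advantage in Eq. 2 (the binary rewards are invented for illustration, and population statistics are used for simplicity):

```python
# Group-relative advantage (Eq. 2): standardize each sampled output's reward
# against the group sampled for the same prompt; every token of output i then
# shares the scalar advantage A_i. Assumes rewards are not all identical
# (otherwise std is zero and the group carries no learning signal).
from statistics import mean, pstdev

def group_advantages(rewards):
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / sigma for r in rewards]

# Binary verifiable rewards for G = 4 sampled outputs (1 = correct answer).
advantages = group_advantages([1.0, 0.0, 0.0, 1.0])
print(advantages)  # [1.0, -1.0, -1.0, 1.0]
```

Correct outputs receive positive advantage and incorrect ones negative, which is exactly the comparative signal the surrogate objective in Eq. 1 amplifies.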
# 4 DRO FOR OPEN-ENDED TASKS The success of the RLVR technique, as outlined in the previous section, stems from its simple yet robust reward mechanism based on verifiable reference outcomes. This outcome-driven reward structure makes RL training more resilient to reward hacking (Silver et al., 2017). RLVR has proven particularly effective in domains such as mathematics, programming, and logical reasoning—where the correctness of a model’s output can be objectively verified and reliably translated into rewards (Shao et al., 2024; Su et al., 2025). However, extending RLVR to open-ended tasks, especially long-form generation – such as text drafting and revision, composing analytical summaries or reports, or form completion – poses a significant challenge. In these scenarios, verifying logical correctness and translating it into a clean reward signal is inherently difficult, even when reference outcomes are available (Zhao et al., 2025b;a; Xu et al., 2025a; Zhou et al., 2025; Lu, 2025). Considering potential solutions, we observe that: Traditional similarity-based metrics fail to capture the essential features of open-ended reasoning outcomes. An intuitive approach involves measuring the similarity between the model-generated output and the reference text using surface-level metrics such as ROUGE, which rely on n-gram overlap. However, such metrics are ill-suited for evaluating logical coherence or reasoning consistency, as they emphasize lexical similarity rather than logical or structural alignment. Two responses that are logically equivalent but lexically distinct may receive a low ROUGE score, while a response that merely copies phrases from the ground truth – without preserving the underlying logic – may score highly. Embedding-based metrics such as cosine similarity offer a more flexible representation space, but they still struggle to reliably distinguish reasoning-valid outputs from superficially similar yet logically flawed ones. 
External dense reward models are infeasible for open-ended reasoning tasks. Leveraging a dedicated reward model to provide dense feedback typically requires preference-style datasets comprising paired examples of preferred outputs – a resource that is often unavailable for many open-ended tasks (Ethayarajh et al., 2024). Training such a reward model introduces additional computational and annotation costs, further limiting its practicality. More critically, reward models are susceptible to reward hacking, where models exploit weaknesses in the learned reward signal rather than genuinely improving reasoning quality (Silver et al., 2017; Shao et al., 2024). LLM-as-a-judge is not a turnkey or reliable solution for a reasoning reward signal. Recently, LLMs have been increasingly adopted as automated evaluators in place of human judges (Gu et al., 2024). However, multiple studies have raised concerns about their reliability, highlighting issues such as sensitivity to prompt phrasing and evaluation rubrics, self-enhancement bias, proneness to reward hacking, and the generator–discriminator gap (Sharma et al., 2024; Gu et al., 2024; Chen et al., 2025c; Liu et al., 2025b; Chen et al., 2025b). Moreover, extracting a dense, task-specific reward signal from LLM-as-a-judge remains particularly challenging (Liu et al., 2025b; Chen et al., 2025c). This challenge is further compounded when aiming for a scalable and turnkey fine-tuning framework across diverse tasks and datasets (Microsoft, 2024; Atreya, 2024; Xu et al., 2025b), as the LLM-as-a-judge must be carefully tailored, validated, and maintained for each new use case (Liu et al., 2025b). # 4.1 REASONING REFLECTION REWARD (R3) Before addressing the challenges discussed above, it is important to understand how a reasoning-capable LLM generates outputs in response to a question or prompt. The output of such a model typically consists of two components: a CoT reasoning segment, followed by the final answer. 
Due to the autoregressive nature of LLMs, the CoT reasoning acts as a latent prefix that conditions the generation of the final answer (Chen et al., 2025d; 2024). In this formulation, the CoT reasoning can be viewed as an implicit intermediate state that guides the model’s final outcome generation. Specifically, the final answer is generated according to the conditional probability distribution $\pi(\cdot \mid \mathbf{q}, \hat{\mathbf{c}})$, where $\mathbf{q}$ denotes the input question or prompt, and $\hat{\mathbf{c}}$ is the CoT reasoning generated in response to $\mathbf{q}$. Intuitively, the quality of the reasoning trace directly influences the likelihood of producing a correct answer – strong reasoning increases this likelihood, while flawed reasoning reduces it. Building upon this property, we introduce a new reward signal – Reasoning Reflection Reward (R3) – designed specifically for open-ended, particularly long-form, generation tasks. R3 is a token-level dense reward signal that measures the consistency between the CoT reasoning generated by the actor model and the reference outcome by placing special emphasis on the key tokens in the reference that reflect the preceding CoT reasoning. We quantify this consistency by leveraging the model’s own self-certainty (Gupta et al., 2024; Kauf et al., 2024) – specifically, the probabilistic likelihood assigned by the LLM to the reference outcome $\mathbf{y}$ conditioned on the prompt $\mathbf{q}$ and its generated CoT reasoning $\hat{\mathbf{c}}$, i.e., $\pi(\mathbf{y} \mid \mathbf{q}, \hat{\mathbf{c}})$. Intuitively, if the model’s reasoning $\hat{\mathbf{c}}$ is correct, the model should assign a high likelihood to the reference outcome $\mathbf{y}$. This likelihood thus serves as a natural reward signal to assess the quality of the generated CoT reasoning. 
Moreover, since it is grounded in golden answers rather than learned reward models, it offers greater reliability and alignment with the target objective – making it a robust choice for RL training, as recommended in state-of-the-art work (Shao et al., 2024; Silver et al., 2017). However, an important oversight in this formulation – also overlooked in recent state-of-the-art work (Chen et al., 2024; Zhou et al., 2025; Tang et al., 2025) – is the uniform treatment of all tokens in the reference outcome $\mathbf{y}$. In practice, this assumption can significantly undermine the effectiveness of the reward signal, and in some cases, even introduce a reverse effect – particularly in long-form generation tasks. Next, we present two key empirical observations that reveal why only a selective subset of reference tokens meaningfully contributes to reasoning consistency. # 4.1.1 KEY REASONING TOKEN BLINDNESS IN LOG-PROBABILITY AGGREGATION A model’s self-certainty over a reference outcome token $y_j$, conditioned on a sampled CoT reasoning trace $\hat{\mathbf{c}}_i$, can be formulated as the conditional probability $\pi(y_j \mid \mathbf{q}, \hat{\mathbf{c}}_i, \mathbf{y}_{<j})$. In practice, we compute this by sequentially appending the reference tokens after the sampled reasoning $\hat{\mathbf{c}}_i$ and measuring the likelihood of each next reference token given the preceding context. Fig. 1 illustrates such a conditional log-probability distribution over the reference outcome tokens for an example prompt from the ParaRev dataset. By way of background, in the ParaRev task, the goal is to revise a given paragraph in response to a set of reviewers’ comments, where not all comments are necessarily relevant to the paragraph (see Section 5.1 for details). 
To measure the aforementioned consistency between a sampled CoT trace and the reference outcome, we simply begin by computing the aggregate probability of the reference tokens under the model’s predictive distribution, i.e., $\sum_{j=1}^{|\mathbf{y}|} \log\big(\pi(y_j \mid \mathbf{q}, \hat{\mathbf{c}}_i, \mathbf{y}_{<j})\big)$. We then use this aggregate value as the reward $(r_i)$ in Eq. 2 to compute the corresponding advantage value for the sampled reasoning trace $\hat{\mathbf{c}}_i$ (a part of the sampled output $\mathbf{o}_i$). Our objective is to assign higher advantage scores to higher-quality CoT traces, enabling a cleaner signal in the optimization objective (Eq. 1). To evaluate whether this plain aggregate token-level probability reward effectively distinguishes better CoT traces within a group, we conduct a case study using a representative example from the ParaRev dataset. Specifically, we sample 16 outputs in response to a given prompt, where each output consists of a CoT reasoning trace followed by an answer (i.e., a revised paragraph). We then manually rank these outputs based on the quality of their final answers and CoT traces – assessing how well they address the relevant reviewer comments from the prompt and align with the reference revision. Fig. 1 presents two representative CoT reasoning samples from this set, arranged in descending order of quality. The differences in quality are visibly substantial. For each CoT sample, we show the corresponding advantage values computed using the aggregate conditional log-probabilities over the reference tokens. Interestingly, the derived advantage values show only weak correlation with the actual sample quality and, in the figure, even rank the lower-quality CoT trace above the higher-quality one. 
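The ranking failure described here is easy to reproduce with toy numbers (all log-probabilities below are invented, not model outputs): a single reasoning-reflective token can be swamped by small, reasoning-irrelevant differences spread over many uninformative tokens.

```python
# Plain aggregate log-probability reward: sum of per-token log-probs of the
# reference outcome conditioned on a sampled CoT trace.
def aggregate_reward(token_logprobs):
    return sum(token_logprobs)

# 20 uninformative reference tokens whose log-probs differ slightly between
# traces for stylistic reasons, plus ONE reasoning-reflective token that
# strongly favors the good trace.
good_cot = [-0.4] * 20 + [-0.5]   # good reasoning: reflective token likely
bad_cot  = [-0.1] * 20 + [-3.0]   # bad reasoning: reflective token unlikely

# The aggregate ranks the bad trace above the good one (~ -5.0 vs ~ -8.5),
# exactly the dilution effect observed in the case study.
assert aggregate_reward(bad_cot) > aggregate_reward(good_cot)
```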
To understand this unexpected behavior, we closely examine the log-probability distributions over the reference outcome shown in Fig. 1. Most tokens in the reference sequence receive similar log-probability values, regardless of the preceding CoT reasoning. Only a small number of tokens – three in this case – exhibit clear variation in likelihood depending on the prior CoT trace. These reasoning-reflective tokens are the ones that truly encode the effect of the preceding reasoning on the model’s certainty over the outcome. However, since these reflective tokens tend to have lower log-probability values than the bulk of the reference tokens, their influence gets diluted when we compute a sequence-wide aggregate log-probability. As a result, their contribution to the reward for the CoT trace, and thus to the corresponding advantage value, is effectively masked. This issue becomes more pronounced when the number of reasoning-reflective tokens is small relative to the total length of the reference outcome. This phenomenon, where critical token-level signals are suppressed by sequence-wide aggregation, has also been observed in other contexts such as model cascading and hallucination detection (Gupta et al., 2024; Chen et al., 2025a). # 4.1.2 WHEN REFERENCE TOKENS COMPENSATE FOR POOR REASONING When computing the reasoning-conditioned probability of the $j^{\mathrm{th}}$ reference token using $\pi(y_j \mid \mathbf{q}, \hat{\mathbf{c}}_i, \mathbf{y}_{<j})$, we are inherently conditioning not only on the CoT reasoning trace $\hat{\mathbf{c}}_i$ but also on all preceding reference tokens. While this formulation is standard in autoregressive models, it introduces a subtle confound in estimating the model’s certainty: preceding reference tokens can influence the likelihood of subsequent ones, potentially inflating the overall reward. 
For instance, consider a scenario where the question involves identifying a goal scorer, and the reference answer is “Lionel Messi”. If the model’s CoT fails to identify the correct answer, the probability of “Lionel” conditioned on the flawed reasoning may be low. However, once “Lionel” is appended to the sequence, the probability of “Messi” is likely to be high due to strong lexical and semantic associations. In effect, the reference tokens themselves can progressively compensate for reasoning errors, leading to an overestimation of the quality of the CoT trace. This effect becomes more pronounced as the reference sequence grows longer, particularly when later tokens are highly correlated with earlier ones. Similar issues have been documented in studies of hallucination propagation and teacher-forced training within autoregressive generation (Varshney et al., 2023; Bachmann & Nagarajan, 2024). # 4.1.3 R3 ADDRESSES TOKEN-LEVEL SENSITIVITY R3 addresses the two aforementioned challenges in leveraging an LLM’s self-certainty over the reference outcome – conditioned on a sampled CoT reasoning trace – as a reward for reasoning quality. First, it mitigates the issue of reasoning-reflective token blindness in aggregate log-probability computation by explicitly identifying and emphasizing such tokens. A natural but impractical approach would be to identify reasoning-reflective tokens via semantic analysis of the prompt and reference outcome, for instance, using an LLM-as-a-judge framework. However, such methods do not scale across prompts and datasets and inherit the reliability concerns associated with LLM-based judges, as discussed earlier. Moreover, many reasoning-reflective tokens may not present themselves as semantically salient in isolation, as illustrated in Fig. 1. To address this, we adopt a comparative approach: we identify reasoning-reflective tokens as those whose likelihoods exhibit high variation when conditioned on different CoT traces. 
That is, in reasoning-conditioned log-probability estimation, the tokens in the reference outcome that show substantial variability across a set of sampled CoT traces are likely to reflect the influence of upstream reasoning. This comparative nature is also emphasized in the GRPO paper with a connection to preference-based reward modeling (Shao et al., 2024). For example, in Fig. 1, we highlight three tokens from the reference outcome that exhibit high standard deviation in their log-probabilities across 16 distinct CoT traces. These tokens are not only statistically reflective of reasoning variation but also intuitively important upon qualitative inspection. In R3, we emphasize these reasoning-reflective tokens by weighting each reference token’s log-probability contribution according to its standard deviation. Specifically, the CoT-conditioned likelihood of the reference outcome is computed as: $\sum_{j=1}^{|\mathbf{y}|} w_{\Delta}(\sigma_j) \log\big(\pi(y_j \mid \mathbf{q}, \hat{\mathbf{c}}_i, \mathbf{y}_{<j})\big)$, where $w_{\Delta}(\sigma_j)$ assigns greater weight to tokens with higher standard deviation $\sigma_j$, thereby amplifying the influence of reasoning-reflective tokens in the reward estimation. Next, we turn our attention to the second challenge: the tendency of reference tokens to compensate for poor CoT reasoning. A natural idea is to propagate the self-certainty (i.e., token-level likelihood) of all preceding reference tokens when computing the certainty of a given token. However, this approach is computationally prohibitive for long sequences and risks propagating misleading certainty from unrelated tokens, potentially leading to underestimation of CoT quality. 
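A minimal sketch of this variance-based weighting; the exact form of $w_{\Delta}$ is left abstract in the text, so the sum-to-one normalization and the toy log-prob matrix below are our own illustrative assumptions:

```python
# R3's reasoning-reflective weighting: reference tokens whose log-probs vary
# most across the G sampled CoT traces get higher weight w_Delta(sigma_j).
from statistics import pstdev

def r3_scores(logprobs):
    """logprobs[i][j] = log pi(y_j | q, c_i, y_<j) under CoT trace i."""
    n = len(logprobs[0])
    # Per-token std across traces identifies reasoning-reflective tokens.
    sigmas = [pstdev([lp[j] for lp in logprobs]) for j in range(n)]
    total = sum(sigmas) or 1.0
    w = [s / total for s in sigmas]  # w_Delta(sigma_j), normalized to sum to 1
    return [sum(wj * lpj for wj, lpj in zip(w, lp)) for lp in logprobs]

# Tokens 0-1 are uninformative (no spread); token 2 is reasoning-reflective.
scores = r3_scores([[-0.1, -0.2, -0.4],    # good CoT trace
                    [-0.1, -0.2, -3.0]])   # bad CoT trace
assert scores[0] > scores[1]               # good trace now ranked higher
```

With uniform weights, the two traces would differ only through the diluted reflective token; the variance weighting restores the intended ranking.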
An alternative is to apply a position-based discounting scheme, down-weighting the contribution of later tokens in the reference outcome under the assumption that they benefit more from cumulative context. Yet this strategy introduces a different failure mode: reasoning-reflective tokens that appear later in the sequence may be unfairly penalized, while non-informative early tokens are disproportionately emphasized. To address these issues, we adopt a more targeted solution that centers around the reasoning-reflective tokens. Our insight is that for poor CoT traces, a reasoning-reflective token is likely to receive low model confidence (i.e., probability). When the reference sequence "corrects" this token, appending it during likelihood computation for subsequent tokens, it begins to influence those tokens, effectively initiating a chain of error compensation. We leverage this observation by introducing controlled self-certainty propagation, which begins at reasoning-reflective tokens and decays over a localized window of subsequent tokens. Formally, for each reasoning-reflective token at position $k$, we define a propagation factor: $P_k^{prop}(j) = p_k^{RRT} + (1 - p_k^{RRT})(1 - e^{-\gamma d})$, where $p_k^{RRT}$ is the self-certainty (probability) of the $k^{th}$ reflection token, $d$ is the distance from the reflection token to the current token $j$, and $\gamma$ is a hyperparameter controlling the propagation decay from the $k^{th}$ token. The final reward formulation incorporates both variance-based token weighting and propagation-aware correction: $\sum_{j=1}^{|\mathbf{y}|} w_{\Delta}(\sigma_j) \log\big(\pi(y_j \mid \mathbf{q}, \hat{\mathbf{c}}_i, \mathbf{y}_{<j}) \prod_{k<j} P_k^{prop}(j)\big)$.

While the targeted decay-based propagation approach is effective when the number of reasoning-reflective tokens is small, it becomes computationally expensive as their proportion increases within the reference outcome. To address this, we propose a more efficient alternative for estimating the self-influence of reference tokens. Specifically, we compute the log-probabilities of reference tokens conditioned on a masked CoT trace, which serves as a baseline estimate of token-level influence originating from the reference itself. For instance, in the earlier football example, the token "Messi" is still likely to receive a high probability due to the presence of the preceding token "Lionel", even when no reasoning is provided. By subtracting these masked-CoT log-probabilities from those computed with the model-generated CoT, we isolate the self-induced certainty boost by reference tokens. Then, the reward formulation becomes: $\sum_{j=1}^{|\mathbf{y}|} w_{\Delta}(\sigma_j) \left[ \log\left(\pi(y_j \mid \mathbf{q}, \hat{\mathbf{c}}_i, \mathbf{y}_{<j})\right) - \log\left(\pi(y_j \mid \mathbf{q}, \mathbf{c}_{\mathrm{masked}}, \mathbf{y}_{<j})\right) \right]$.

# 4.2 DIRECT REASONING OPTIMIZATION WITH R3

We now introduce Direct Reasoning Optimization (DRO), an RL-based fine-tuning framework that employs R3 as its primary reward signal for guiding reasoning quality and dynamic data filtering for open-ended reasoning tasks.

# 4.2.1 DESIGN OF DRO

Fig. 2 presents an overview of DRO, where the model learns to optimize its own reasoning through direct internal reward feedback. DRO builds upon the GRPO framework (Shao et al., 2024), which aligns with the group-relative and comparative nature of our core reward, R3.
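The masked-CoT baseline variant of the reward reduces to a weighted difference of two log-probability vectors. A minimal sketch, assuming the two conditional log-probability sequences and the token weights have already been computed:

```python
def r3_masked_baseline(logprobs_with_cot, logprobs_masked, weights):
    """Illustrative sketch of the efficient R3 variant: subtract reference-token
    log-probs computed under a masked CoT from those computed under the
    model-generated CoT, isolating the certainty genuinely contributed by the
    reasoning. All argument names are this sketch's, not the paper's.
    """
    return sum(
        w * (lp_cot - lp_masked)
        for w, lp_cot, lp_masked in zip(weights, logprobs_with_cot, logprobs_masked)
    )
```

A token like "Messi" that is already near-certain without any reasoning contributes roughly zero, since its two log-probabilities nearly cancel.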
Given a prompt $\mathbf{q}$, the actor policy $\pi_{\theta}$ generates a group of outputs, each comprising a CoT trace $\hat{\mathbf{c}}_i$ followed by a final outcome $\hat{\mathbf{y}}_i$. We replace $\hat{\mathbf{y}}_i$ with the ground-truth reference outcome $\mathbf{y}$ to compute the $\mathrm{R3}_i$ score for each $\hat{\mathbf{c}}_i$. To evaluate R3, we use an internal policy $\pi_{\mathrm{rwd}}$, instantiated in three variants: (1) statically using the reference policy $\pi_{\mathrm{ref}}$, (2) dynamically syncing with $\pi_{\theta}$, and (3) using a lagged version of $\pi_{\theta}$. Since R3 only scores the reasoning trace and not the generated final outcome, we observed that models tend to produce verbose completions, e.g., appending explanations at the end of the revised paragraph in the ParaRev task. To mitigate this, we apply a length penalty solely on the final outcome: $r_{\mathrm{len}}^{\beta}(\hat{\mathbf{y}}, \mathbf{y}) := 1 - \beta \cdot \frac{\big| |\mathbf{y}| - |\hat{\mathbf{y}}| \big|}{|\mathbf{y}|}$, where $\beta$ controls the strength of the penalty. The final reward is a weighted combination of $\mathrm{R3}_i$ and the length penalty, which is used to compute the advantage (Eq. 2). This advantage is then used in the GRPO objective (Eq. 1) to update the model parameters.

# 4.2.2 DATA FILTERING AND EFFICIENCY

Recent work (Meta, 2025; Muennighoff et al., 2025; Jiang et al., 2025; Ye et al., 2025; Yang et al., 2025; Costello et al., 2025) highlights the critical role of data filtering in reinforcement learning, demonstrating its impact on both data efficiency and downstream task performance.
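The length penalty $r_{\mathrm{len}}$ on the final outcome is a one-liner; a sketch follows, with the default $\beta$ chosen purely for illustration:

```python
def length_penalty(gen_len, ref_len, beta=0.1):
    """Sketch of the length penalty r_len on the final outcome:
    1 - beta * | |y| - |y_hat| | / |y|. The default beta is illustrative;
    the paper tunes it per setting."""
    return 1.0 - beta * abs(ref_len - gen_len) / ref_len
```

A completion matching the reference length incurs no penalty; the penalty grows linearly with the relative length gap in either direction.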
These approaches typically rely on either LLM-as-a-judge frameworks or verifiable reward signals. However, in open-ended reasoning tasks where no reliable verifiers exist, such strategies are not applicable. Moreover, using LLM-as-a-judge would require designing task- and dataset-specific prompts, compounding the complexity and inheriting the limitations discussed earlier. To address this, DRO introduces a generic, dynamic data filtering mechanism tailored for open-ended reasoning tasks leveraging R3, enhancing data efficiency during RL-based training without the need for manual prompt engineering or external verification. DRO performs data filtering at regular intervals throughout training, beginning with an initial filtering round before the start of training. Each filtering round is guided by the current policy model $\pi_{\theta}$ and is conducted in two stages: • Filtering Out Beyond-Knowledge or Extremely Difficult Questions: The latent presence of prerequisite knowledge within the model largely determines the effectiveness of RL-based reasoning training (Ye et al., 2025; Snell et al., 2024; Yue et al., 2025). Accordingly, we begin by filtering out questions that are either excessively difficult or likely beyond the model's current knowledge. For each question/prompt, we sample $N$ CoT reasoning traces (typically 16 or 32, depending on the setup) using the current policy $\pi_{\theta}$. As in R3, we evaluate self-certainty over the reference outcome tokens conditioned on each CoT trace. Instead of log-probabilities, we use token rank $y_j^{(r)}$ as a proxy for prediction difficulty. For each CoT trace $\mathbf{c}_i$, we sort the tokens by rank and compute the average of the bottom $\rho\%$ (i.e., highest-ranked, least-confident) tokens: $avg(max_{\rho}^{i}\, y_j^{(r)})$.
This average reflects how difficult the reference tokens are to predict given the sampled reasoning. If, across all $N$ generations for a given question, none achieve a sufficiently low $avg(max_{\rho}^{i}\, y_j^{(r)})$, i.e., within a predefined top-$k$ threshold, we consider the question either too difficult or outside the model's current knowledge scope and exclude it from this training round. • Filtering Out Questions with Low Reasoning Variation: In the second stage, we filter out questions that exhibit low variation in the reasoning space, which typically corresponds to overly simple questions (assuming the previous stage has already removed most overly difficult ones). We leverage the R3 scores computed in the prior step using the current policy $\pi_{\theta}$. Specifically, for each prompt, we compute the maximum per-token standard deviation across $N$ sampled CoT traces: $\max(\sigma_j)$. This value captures the highest degree of reasoning-induced variability in reference token predictions. We then rank all prompts in descending order of $\max(\sigma_j)$ and remove a proportion of the lowest-ranked samples. The cutoff is determined based on the available training data size and the model's capacity. In each round of filtering, we carry forward $10\%$ of data from the previous training set.

# 5 EXPERIMENTS

# 5.1 EXPERIMENTAL SETUP

Datasets. We use the following datasets in our experiments: (1) ParaRev (Jourdan et al., 2025): This dataset contains over 48K original-revised paragraph pairs from scientific papers on OpenReview, along with corresponding reviews. Since many papers undergo multiple revisions, we focus on the initial revision, as it typically reflects the most substantial changes in response to reviewer feedback.
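The two filtering stages can be sketched together as below. This is an illustrative reconstruction: the thresholds, the fraction $\rho$, and the keep ratio are all placeholder values, and the rank and standard-deviation statistics are assumed precomputed.

```python
def filter_questions(rank_stats, sigma_stats, rho=0.2,
                     rank_threshold=50.0, keep_ratio=0.8):
    """Sketch of DRO's two-stage data filtering (all thresholds illustrative).

    rank_stats[q][i] is the list of per-token reference-token ranks for
    question q under sampled CoT trace i; sigma_stats[q] is the max per-token
    std of reference log-probs across that question's traces.
    """
    survivors = []
    for q, traces in rank_stats.items():
        # Stage 1: keep q only if at least one trace makes the hardest
        # rho-fraction of reference tokens predictable enough (low avg rank).
        scores = []
        for ranks in traces:
            worst = sorted(ranks, reverse=True)[: max(1, int(len(ranks) * rho))]
            scores.append(sum(worst) / len(worst))
        if min(scores) <= rank_threshold:
            survivors.append(q)
    # Stage 2: drop the questions with the least reasoning-induced variation.
    survivors.sort(key=lambda q: sigma_stats[q], reverse=True)
    return survivors[: max(1, int(len(survivors) * keep_ratio))]
```

Stage 1 removes beyond-knowledge questions (no trace makes the hard tokens predictable); stage 2 removes questions whose reference tokens barely react to the reasoning at all.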
As ParaRev does not include the full paper context for each paragraph, which is crucial for reasoning, we extend the dataset by locating the paragraphs in the raw papers and extracting their preceding and following context from CASIMIR (Jourdan et al., 2024). This results in an adapted dataset of 4.8K samples, and we follow a $95\%/5\%$ train-test split. (2) FinQA (Chen et al., 2021): A dataset focused on numerical reasoning over financial data, comprising over 8K samples with expert-written context, questions, reasoning programs, and answers. For our RL training, we use only the context, questions, and answers, adhering to the original train-test split.

Training. We conduct DRO training on the DeepSeek-R1-Distill-Qwen-7B and 14B models. A learning rate of $1.0 \times 10^{-6}$ is used with a warmup ratio of 0.2 and a "constant with warmup" learning rate scheduler. During each training step, the actor model generates 16 responses per question using a temperature of 1.0, top-$p$ sampling with $p = 0.95$, a repetition penalty of 1.0, and a maximum completion length of 10,000 tokens for FinQA and 8,000 tokens for ParaRev. We process 256 questions per step for FinQA and 128 for ParaRev. For GRPO optimization, we adopt the loss function from Liu et al. (2025a), using scaled rewards, masking for truncated completions, and an upper clipping coefficient of $\epsilon_{\mathrm{high}} = 0.2$. While prior studies typically set the entropy regularization weight $\beta = 0$, we empirically found $\beta = 0.001$ to improve training stability and convergence. Training is conducted across three nodes, each with $8\times$ NVIDIA A100 GPUs. We utilize HuggingFace TRL for reinforcement learning, DeepSpeed for distributed training, and vLLM for rollout generation and R3 computation.

Metrics. For the FinQA task, where answers are verifiable, we use numerical correctness with a $2\%$ tolerance.
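The FinQA correctness check with a $2\%$ tolerance can be sketched as a relative-error comparison (a minimal sketch; the zero-gold fallback is our assumption, as the paper does not specify it):

```python
def finqa_correct(pred, gold, tol=0.02):
    """Sketch of the FinQA metric: numerical match within a relative
    tolerance (2% by default, per the paper). The gold == 0 fallback to an
    absolute comparison is an assumption of this sketch."""
    if gold == 0:
        return abs(pred) <= tol
    return abs(pred - gold) / abs(gold) <= tol
```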
For the ParaRev task, we adopt pairwise win rate as the primary evaluation metric. To compute win rates, we adapt the AlpacaEval prompt to the revision setting by providing the paper context, reviewer comments, original paragraph, and reference revision for each sample. Our validation indicates that this prompt yields a $94.6\%$ win rate for expert revisions over GPT-4o revisions, demonstrating strong alignment with human preferences. The full prompt template is provided in Appendix A. To mitigate potential self-enhancement bias (Zheng et al., 2023), we use both GPT-4o and Claude 3.7 Sonnet as judges.

Baselines. We mainly compare DRO with the following baselines in our evaluation: (1) Base Models: The off-the-shelf DeepSeek-R1-Distill-Qwen-7B (for FinQA) and 14B (for ParaRev) models without RL on the specific tasks. (2) ROUGE (ParaRev): For ParaRev, although the outcomes are not directly verifiable, we use the ROUGE-1 F1 score (Lin, 2004) as the reward in GRPO to represent RL with a standard automatic metric as a proxy verifier. (3) Correctness (FinQA): For FinQA, where outputs are math-like and easily verifiable, we use binary correctness (within a $2\%$ tolerance) as the reward in GRPO to serve as an upper bound where ideal outcome verification is feasible. (4) Aggregate: To assess the efficacy of R3, we include a set of baselines that use the aggregate certainty across all tokens as the reward. As these baselines share the same training workflow as DRO, we denote them as DRO-Aggr. Specifically for ParaRev, we introduce DRO-Aggr-S and DRO-Aggr-R to represent strict and relaxed length control, respectively, each using a different $\beta$ in the length reward to study its impact. (5) GPT-4o: A strong baseline using a significantly larger model.

# 5.2 RESULTS

# 5.2.1 PARAREV

DRO with R3 improves reasoning quality and alignment.
As shown in Table 1, DRO-R3 achieves higher win rates against GPT-4o than all other variants, outperforming the base model by $8.0\%$ (GPT judge) and $10.2\%$ (Claude judge), and even surpassing GPT-4o itself despite being a much smaller model. It also generates outputs with lengths closer to the reference revisions, indicating more faithful and efficient edits. Given the known length bias in LLM-based evaluators (Zheng et al., 2023), this improvement further reflects better alignment with human preference.

Table 1: Win rates against GPT-4o on ParaRev

Figure 3: ParaRev training insights.

R3 outperforms ROUGE-based rewards. Compared to the ROUGE-rewarded baseline, R3 yields a win rate improvement of $16.0\%$ (GPT judge) and $20.7\%$ (Claude judge). We observe that the ROUGE-trained model frequently leaves the paragraph unchanged, likely due to the reward favoring textual overlap, resulting in shorter outputs similar in length to the original paragraph. This behavior harms revision quality.

R3 also outperforms aggregate-certainty rewards. Compared to aggregated certainty rewards, R3 leads to consistently higher win rates regardless of length control settings. Against the same base model, DRO-R3 achieves up to a $4.25\times$ improvement over DRO-Aggr-R, highlighting the importance of reasoning-reflective token weighting. Furthermore, strict length control (DRO-Aggr-S) degrades performance, suggesting that rigid enforcement of output length may suppress effective reasoning and degrade revision quality.

Training insights. (1) R3 stimulates longer reasoning generation: As shown in Figure 3a, R3 encourages the model to produce longer CoTs, with generation length growing steadily from 1k to over 2.5k tokens during training. In contrast, aggregate-certainty rewards lead to early collapse below 100 tokens, as the model learns to omit reasoning due to the misleading reward signal.
(2) Implicit improvement in textual similarity: Figure 3b shows that, despite ROUGE not being part of the reward, DRO with R3 substantially improves ROUGE-L F1 from 0.4 to 0.7 in the early stage of training, suggesting that optimization toward reasoning-reflective tokens also results in better surface-level alignment. (3) Filtering accelerates and stabilizes training: As shown in Figure 3c, on-the-fly data filtering in DRO reduces training time by $45\%$ while achieving comparable final reward scores and smoother convergence, demonstrating its efficiency and robustness.

# 5.3 FINQA

R3 achieves improvement comparable to rewards from an ideal verifier. On FinQA, a math-like task with reliably verifiable outcomes, DRO-R3 achieves gains comparable to those obtained using correctness-based rewards. Specifically, as shown in Table 2, it falls only $0.9\%$ short on Pass@1 but outperforms the correctness baseline on Pass@k for $k \geq 2$. This result highlights that R3 can match the benefits of correctness-based rewards without access to a reliable verifier, demonstrating its potential for tasks where ideal outcome verification is difficult to obtain or not well-defined.

Table 2: Pass@k on FinQA

Figure 4: FinQA training insights.

R3 outperforms aggregate-certainty rewards even in short-outcome tasks. Although FinQA involves relatively short outputs where most tokens appear to contribute directly to the final answer, R3 still outperforms the aggregate-certainty reward. Compared to the base model, DRO-R3 achieves a $4.15\times$ higher improvement than DRO-Aggr. This indicates that reasoning-reflective tokens are not exclusive to long-form generation. For example, in math-like tasks, tokens such as the decimal point "." may reflect reasoning quality more than trailing digits.

Training insights.
(1) Steady reward improvement and stabilization: As shown in Figures 4a and 4b, DRO consistently improves the R3 reward while reducing its standard deviation across sampled reasoning traces, indicating both stronger and more stable reward attribution over time. (2) Emergence of longer reasoning: Generation length steadily increases from 1k to over 3k tokens (Figure 4c). Interestingly, while the R3 improvement slows around step 6 (Figure 4a), the reasoning length continues to grow almost linearly. This divergence suggests that as the reward signal begins to saturate, the model continues to elaborate its reasoning, potentially exploring richer explanations or extended self-reflection beyond what R3 explicitly rewards. This behavior remains effective, as R3 continues to improve gradually thereafter.
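The Pass@k results discussed above are conventionally computed with the standard unbiased estimator over $n$ sampled generations of which $c$ are correct; the paper does not spell out its estimator, so the following is a sketch of that common choice:

```python
from math import comb

def pass_at_k(n, c, k):
    """Common unbiased Pass@k estimator: the probability that at least one of
    k samples drawn (without replacement) from n generations, c of which are
    correct, is correct: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # not enough incorrect samples to fill k slots
    return 1.0 - comb(n - c, k) / comb(n, k)
```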
Recent advances in Large Language Models (LLMs) have showcased impressive reasoning abilities in structured tasks like mathematics and programming, largely driven by Reinforcement Learning with Verifiable Rewards (RLVR), which uses outcome-based signals that are scalable, effective, and robust against reward hacking. However, applying similar techniques to open-ended long-form reasoning tasks remains challenging due to the absence of generic, verifiable reward signals. To address this, we propose Direct Reasoning Optimization (DRO), a reinforcement learning framework for fine-tuning LLMs on open-ended, particularly long-form, reasoning tasks, guided by a new reward signal: the Reasoning Reflection Reward (R3). At its core, R3 selectively identifies and emphasizes key tokens in the reference outcome that reflect the influence of the model's preceding chain-of-thought reasoning, thereby capturing the consistency between reasoning and reference outcome at a fine-grained level. Crucially, R3 is computed internally using the same model being optimized, enabling a fully self-contained training setup. Additionally, we introduce a dynamic data filtering strategy based on R3 for open-ended reasoning tasks, reducing cost while improving downstream performance. We evaluate DRO on two diverse datasets -- ParaRev, a long-form paragraph revision task, and FinQA, a math-oriented QA benchmark -- and show that it consistently outperforms strong baselines while remaining broadly applicable across both open-ended and structured domains.
# 1 Introduction

Modern data-intensive applications continuously generate diverse types of data stored in data lakes [16], encompassing structured data (e.g., tables, graphs), semi-structured data (e.g., JSON, HTML), and unstructured data (e.g., text, images, video). This diversity introduces a critical challenge of data variety [15], underscoring the need for integrated analysis across multi-modal datasets, which often contain complementary information essential for extracting timely and valuable insights. For example, in healthcare scenarios, physicians simultaneously analyze heterogeneous patient data, such as X-ray images and textual diagnostic reports, to support accurate and timely diagnoses. In e-commerce, users seek products by jointly exploring visual content, textual descriptions, and structured metadata. Recently, the rapid advancement of Large Language Models (LLMs) has opened new opportunities for analyzing multi-modal data at the semantic level through natural language interactions. For example, GPT-4 [2] demonstrates the ability to reason over text, images, and JSON data within a unified conversational interface. However, despite these advancements, current LLM-based approaches have yet to fully unlock the potential of multi-modal data analytics, primarily due to the following limitations:

Figure 1: An Illustrative Example of MCP-Based Multi-Modal Data Analytics.

Limitation 1: Limited Query Expressiveness. Existing approaches typically support querying only a subset of data modalities and suffer from limited expressiveness in capturing complex user intents. For example, ELEET [24] supports analytics over text and tabular data; Palimpzest [11] is restricted to text and image processing; and AOP [25] primarily targets document and text data.
Underlying these systems are three main strategies for query translation: (1) mapping natural language (NL) directly to modality-specific operations [25], (2) translating NL to SQL using NL2SQL techniques [13, 6], or (3) requiring users to write declarative SQL-like query languages [11]. However, these methods remain inadequate for data lakes, where user queries are often ambiguous and span a wide range of data modalities.

Limitation 2: High Inference Overhead. Most existing approaches rely on a single, unified LLM to process heterogeneous data modalities, which leads to significant inference overhead. This issue is attributed to the computational cost of state-of-the-art LLMs, such as GPT-4, which contains over a trillion parameters. Moreover, queries that span multiple modalities introduce complexity in query planning and execution, further impacting efficiency. From a query processing perspective, users typically expect low-latency responses, making it imperative to reduce inference overhead.

Limitation 3: Knowledge and Data Staleness. Data in the lake is often incomplete or becomes outdated over time, which undermines the freshness and reliability of analytical results. Moreover, the knowledge embedded in LLMs can also become stale as new data and emerging patterns (e.g., novel diseases or evolving user behavior) are not reflected in the original training data. For example, an LLM may fail to recognize symptoms of a newly discovered disease or interpret related medical images correctly. Existing approaches largely ignore this issue and operate under static assumptions. Therefore, it is essential to develop mechanisms for augmenting stale data and refreshing LLM knowledge to ensure up-to-date and contextually relevant analytics.
Model Context Protocol (MCP) [8] is a novel framework that standardizes the interaction among AI agents (e.g., knowledge-grounded LLMs), external tools (e.g., database engines, function calls), and data resources (e.g., structured and unstructured sources). By defining a unified interface, MCP enables seamless integration of real-time inputs, environmental variables, and domain-specific constraints, thereby supporting robust and scalable decision-making across diverse applications. In the context of multi-modal data analytics, MCP provides an effective abstraction layer over heterogeneous data lakes. Users can interact with AI agents through natural language, while the analytics workload is offloaded to dedicated MCP servers, each tailored to handle a specific data modality. Moreover, with the growing ecosystem of MCP servers, thousands of off-the-shelf components will be readily available for processing multi-modal data efficiently. Building upon the capabilities of MCP, we propose a novel multi-modal data analytics system, named TAIJI, to address the aforementioned limitations. TAIJI introduces a new architecture that leverages MCP to offload modality-specific analytical tasks to dedicated MCP servers. As illustrated in Figure 1, TAIJI differs fundamentally from conventional approaches that rely on a single, unified LLM to process all modalities (e.g., M1, M2, M3 represent different data modalities). Instead, TAIJI adopts a client–host architecture: the client receives an NL query, and a host-side LLM agent interprets the user intent. This agent decomposes the query into a set of modality-specific operators and formulates a structured query plan. Each sub-plan is dispatched to an appropriate MCP server, where a tailored LLM, which is optimized for the corresponding modality, executes the analysis. 
This modular and distributed design yields several key advantages: (1) higher inference accuracy by leveraging specialized models, (2) improved scalability through parallelism across servers, and (3) significantly reduced inference overhead compared to monolithic LLM-based solutions. In summary, this paper makes the following key contributions: (1) MCP-Based Multi-Modal Analytical System. We propose a novel multi-modal data analytics framework built upon the Model Context Protocol (MCP), which addresses the limitations of existing LLM-based systems. The core idea is to assign each MCP server a tailored LLM optimized for a specific data modality, embracing the principle that "one size does not fit all". (2) Semantic Operator Hierarchy. We introduce a hierarchical set of semantic operators that spans structured, semi-structured, and unstructured data. This design enables TAIJI to support advanced tasks such as cross-modal joins, while maintaining compatibility with existing operators for relational data, documents, graphs, images, audio, and video, thus avoiding "reinventing the wheel". (3) Query-Driven Model Optimization and Data Discovery. We develop a query-driven fine-tuning strategy to optimize the reasoning ability of LLMs on each MCP server. Moreover, we propose a unified embedding-based semantic representation combined with a hybrid indexing mechanism for multi-modal data discovery. (4) Dynamic Knowledge and Data Refreshing. We propose a twofold updating mechanism to ensure freshness of both the data and the LLMs. First, we leverage deep research techniques to enrich the data lake with the latest documents and web content. Second, we introduce a lightweight editing mechanism (supporting insert, update, and delete operations) augmented by machine unlearning techniques to refresh LLM knowledge.

# 2 Preliminaries

# 2.1 Problem Definition

Multi-Modal Query Processing (MMQP).
Given a natural language (NL) query $Q$ over a collection of datasets $\mathbb{D}$, where each dataset $D_i \in \mathbb{D}$ is associated with a modality $M_i \in \mathbb{M}$, the goal is to compute a result set $R$ that satisfies the query predicates, where each result tuple $r \in R$ may contain data items spanning multiple modalities in $\mathbb{M}$. To evaluate the performance of MMQP, we adopt three standard metrics: (1) recall, which measures the completeness of the retrieved results, (2) precision, which quantifies the ratio of correct results among those returned, and (3) latency, which reflects the query execution efficiency. In general, the MMQP problem has the following challenges:

Challenge 1: Cross-Modality Query Planning. Unlike traditional query planning in relational databases, which focuses solely on relational operators, orchestrating operator pipelines across multiple data modalities is more complex. This complexity arises from the heterogeneous nature of operators and data formats. Therefore, MMQP requires an effective yet lightweight mechanism to generate reasonable execution plans.

Challenge 2: Efficient Inference on Large-Scale Data. Compared with using a general-purpose LLM with a massive number of parameters, employing a tailored, relatively lightweight LLM for modality-specific data can significantly reduce inference overhead. However, this approach still encounters scalability challenges when dealing with large volumes of data, particularly unstructured data such as text, images, or videos.

Challenge 3: Data and Knowledge Freshness. High-quality data in the data lake is often insufficient to precisely and comprehensively answer user queries. Therefore, it is crucial to continuously update both the data lake and the LLM knowledge within each MCP server. However, open-domain data sources are vast and challenging to explore, making integration into the data lake cumbersome and resource-intensive.
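The recall and precision metrics used to evaluate MMQP can be sketched over sets of result identifiers (a minimal sketch; the edge-case conventions for empty sets are our assumptions):

```python
def mmqp_metrics(returned, relevant):
    """Sketch of the MMQP evaluation metrics: recall (completeness of
    retrieval) and precision (fraction of returned results that are correct).
    `returned` and `relevant` are collections of result identifiers; the
    empty-set conventions below are this sketch's assumptions."""
    returned, relevant = set(returned), set(relevant)
    hits = len(returned & relevant)
    recall = hits / len(relevant) if relevant else 1.0
    precision = hits / len(returned) if returned else 0.0
    return recall, precision
```

Latency, the third metric, is simply wall-clock query execution time and needs no formula.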
# 2.2 Related Work In recent years, LLMs have made remarkable advances in natural language understanding, generation, and reasoning, opening up new opportunities for multi-modal data management. By expressing their query requirements in natural language, users can interact with systems that leverage LLMs to interpret query intent and dynamically orchestrate external databases, knowledge bases, or APIs to execute complex analytical tasks. CAESURA [23] (also known as ELEET [24]) utilizes LLMs as multimodal query planners that translate NL inputs into executable multimodal query plans. Its key idea is leveraging LLMs to understand user intent and dynamically construct efficient query workflows tailored to diverse data modalities. Building upon this, Gorilla [18] enhances LLMs’ reasoning and tool-use capabilities by incorporating retrieval-augmented generation (RAG), enabling real-time access to external knowledge bases and APIs during multimodal query processing. Toolformer [21] further advances LLM tool integration by fine-tuning models with limited examples, empowering them to autonomously invoke external APIs, such as database engines and computational tools, during generation. This overcomes LLMs’ traditional limitation to static text generation, enabling dynamic interaction with external data sources for complex analytical tasks. ThalamusDB [9] introduces a hybrid architecture combining a dedicated query planner with deep learning inference. It allows SQL queries to include NL predicates, offloading parts of the execution to neural models for image and text processing, while leveraging approximate query processing (AQP) to balance efficiency and accuracy. However, it still relies on manual annotations, which may hinder scalability. PALIMPZEST [12, 11] extends declarative query languages to express AI workloads, generating optimized plans that jointly consider the cost of both AI inference and relational operations. 
AOP [25, 26] proposes defining semantic operators over documents, text, and structured tables, and employs LLMs to iteratively generate executable query plans for multi-modal content. However, existing works have three main limitations. First, they support only a few data modalities and are hard to extend, i.e., adding new modalities or operators often requires heavy manual work. In contrast, TAIJI supports a wide range of multi-modal operators with a scalable MCP-based architecture. Second, most systems use a single LLM to handle all data types, which leads to low efficiency and accuracy. TAIJI instead uses tailored LLMs for different modalities, improving performance. Third, they focus on static data and ignore updates. TAIJI addresses this by automatically enriching the data and refreshing model knowledge over time.

# 3 TAIJI's Overview

Figure 2 presents an overview of TAIJI, which consists of three main components: the MCP Client Manager, the MCP Server Manager, and the MCP Augmentor. In the following, we introduce them in detail.

# 3.1 MCP Client Manager

# 3.1.1 NL2Operator

Understanding user intent is critical for accurate query answering. However, neither SQL-like declarative languages nor UDF-based procedural languages are well-suited for this task. This is due to two main challenges: First, NL queries are inherently ambiguous and difficult to interpret accurately. Second, translating NL queries into complex declarative or procedural languages increases the likelihood of inaccuracies or errors. To address this, we propose NL2Operator, a method that defines a hierarchical set of semantic operators, each linked to specific MCP servers. An LLM-based agent maps user queries to these operators based on query intent. For example, semi-structured data processing is handled by an intermediate MCP server, which can either process the query directly or delegate it to specialized sub-servers (e.g., for JSON, HTML, XML).
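The operator-to-server routing behind NL2Operator can be sketched as a hierarchical registry. All modality, operator, and server names below are illustrative; the paper does not enumerate a concrete operator set.

```python
# Sketch of NL2Operator dispatch: a hierarchical registry mapping semantic
# operators to MCP servers. Every name here is a placeholder for illustration.
OPERATOR_HIERARCHY = {
    "structured": {"sql_filter": "relational-server", "graph_match": "graph-server"},
    "semi_structured": {"json_extract": "json-server", "html_parse": "html-server"},
    "unstructured": {"image_caption": "vision-server", "text_summarize": "text-server"},
}

def route_operator(op_name):
    """Return the (modality, server) pair responsible for a semantic operator,
    walking the hierarchy top-down as an intermediate MCP server would when
    delegating to specialized sub-servers."""
    for modality, ops in OPERATOR_HIERARCHY.items():
        if op_name in ops:
            return modality, ops[op_name]
    raise KeyError(f"no MCP server registered for operator {op_name!r}")
```

In the actual system an LLM-based agent would choose the operator names from the NL query; the registry lookup itself stays deterministic.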
This hierarchical design offers three advantages: (1) Scalability and flexibility, by modularizing the processing across different servers; (2) High concurrency support, through distributed workload balancing; (3) Simplified translation, by mapping NL to high-level modality-specific operators, reducing the burden on the LLM.

Figure 2: An Overview of TAIJI.

# 3.1.2 Query Planner

The goal of the query planner is to find the execution plan with the lowest cost, thereby improving overall efficiency. Based on a hierarchy of semantic operators, we construct a directed acyclic graph (DAG) to represent the query workflow. We then implement a sampling-based cost estimation optimizer, which includes selectivity estimation, cost modeling, plan evaluation, and final plan selection. The optimization process works as follows: First, an operator cost sampler collects runtime statistics, such as per-tuple processing time and selectivity, for each operator. Using this data, the optimizer builds a latency-based cost model to evaluate candidate plans. The plan with the lowest estimated execution time is selected for execution. This sampling-based approach supports accurate, data-driven cost prediction and adaptive plan selection. It offers two key advantages: (1) it maintains reliable cost estimates under different data distributions and runtime conditions, and (2) it produces more deterministic plans compared to LLM-based query planning.

# 3.1.3 Iterative Operator Optimizer

Each MCP server is responsible for processing data from a specific modality, autonomously interacting with its underlying data source, whether structured (e.g., SQL-based) or unstructured (e.g., a document corpus), via iterative operator optimization [10, 22]. Unlike traditional pipelines that rely on a single retrieval pass, TAIJI introduces a bidirectional communication channel between the MCP server and the MCP client (or host).
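The sampling-based plan selection described for the query planner can be sketched as follows. This is an illustrative model, not TAIJI's implementation: per-tuple costs, selectivities, and the base cardinality are placeholder numbers.

```python
def pick_plan(plans, stats, base_rows=1000.0):
    """Sketch of a sampling-based cost optimizer: estimate each candidate
    plan's latency as the sum over its operators of per-tuple time times the
    expected input cardinality, which shrinks by each operator's sampled
    selectivity. `plans` maps plan name -> ordered operator list; `stats`
    maps operator -> (per_tuple_seconds, selectivity)."""
    def latency(ops):
        rows, total = base_rows, 0.0
        for op in ops:
            per_tuple, selectivity = stats[op]
            total += per_tuple * rows      # cost of running op on its input
            rows *= selectivity            # surviving tuples feed the next op
        return total
    return min(plans, key=lambda name: latency(plans[name]))
```

The model captures the classic intuition that cheap, selective operators (e.g., a metadata filter) should run before expensive per-tuple LLM inference.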
This feedback loop enables the server to request prompt refinements or clarifications from the upstream controller when retrieval results are insufficient, supporting a dynamic, feedback-driven, and iterative optimization process. An example of the iterative architecture in TAIJI is illustrated in Figure 4.
(Figure: an iterative refinement example for the user question “Which suppliers delivered electronics to California in the last quarter of 2023?”, showing three iterations in which the SQL query is progressively refined, the answer evolves from “Foxconn and Pegatron” to the correct supplier list, and each analysis step diagnoses issues such as an oversimplified schema assumption or the vague “Electronics” category.)
The process starts when the MCP server receives a sub-task from the central LLM agent. Guided by the task’s semantics and structure, the server formulates an initial query and retrieves candidate results from its local database. These results are evaluated along multiple dimensions, including coverage, redundancy, ambiguity, and informativeness. If deficiencies are detected (e.g., sparse or misaligned results), the server engages a reasoning-based refinement loop to revise the query and improve alignment with the task’s underlying intent. This iterative cycle continues until the result set meets a predefined confidence threshold.
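The retrieve-evaluate-refine cycle described above can be sketched in a few lines. This is an illustrative skeleton, assuming caller-supplied `retrieve`, `evaluate`, and `refine` callables (hypothetical names, not TAIJI's API):

```python
def iterative_optimize(query, retrieve, evaluate, refine,
                       threshold=0.8, max_iters=3):
    """Retrieve results, score them, and refine the query until the
    score meets a predefined confidence threshold (or iterations run out)."""
    results, score = None, 0.0
    for _ in range(max_iters):
        results = retrieve(query)
        score = evaluate(results)
        if score >= threshold:
            break
        # Deficient results: revise the query before the next pass.
        query = refine(query, results)
    return query, results, score
```

In practice the evaluation step would score coverage, redundancy, ambiguity, and informativeness, and the refinement step would invoke the LLM-driven reasoning loop; both are stubbed out here as plain functions.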
Moreover, we employ a curriculum learning strategy that gradually increases task complexity, from simple key-value lookups to multi-hop reasoning over relational and multimodal data, thus guiding the agent through progressively more abstract levels of query planning and evaluation.
# 3.2 MCP Server Manager
# 3.2.1 Embedding Indexing
For high-dimensional vector data, traditional vector indexing methods [3] typically construct a graph structure by linking neighboring data points based on nearest-neighbor distance. However, when metadata-based filters are applied, many of these connections become invalid, resulting in disconnected subgraphs. This leads to search failures, as the index may no longer be able to reach the true nearest neighbors. To address this, TAIJI introduces a filter-aware vector indexing approach designed to maintain search effectiveness even under filtering constraints. The key idea is to selectively augment the vector graph by adding edges that connect only those data points that comply with the filtering conditions. This ensures that the search process remains efficient while achieving high recall. Figure 4 illustrates the core concept. The key components of our method include: (1) Condition-aware graph augmentation: Dynamically reinforce valid connections among filter-compliant nodes to ensure index traversability even after metadata-based filtering is applied. (2) Hybrid search robustness: Maintain a balance between filtering precision and vector search accuracy by preserving essential traversal paths within the index. (3) Recall preservation: Ensure that the augmented index structure delivers competitive nearest-neighbor retrieval performance, comparable to that of unfiltered searches. This approach extends traditional vector search to support constrained queries where both semantic similarity (via embeddings) and structured criteria (via metadata) must be jointly satisfied.
Figure 4: Embedding of Multi-Modal Data.
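To make the condition-aware augmentation idea concrete, the following toy sketch adds edges among filter-compliant points of a proximity graph so the filtered subgraph stays traversable. It uses brute-force neighbor selection for clarity, assuming small data; it is not the actual index algorithm:

```python
def augment_for_filter(graph, vectors, passes, k=2):
    """Link each filter-compliant node to its k nearest filter-compliant
    neighbors, so the filtered subgraph remains connected and searchable."""
    def dist2(a, b):
        # Squared Euclidean distance between two stored vectors.
        return sum((x - y) ** 2 for x, y in zip(vectors[a], vectors[b]))

    augmented = {n: set(nbrs) for n, nbrs in graph.items()}
    valid = [n for n in graph if passes(n)]
    for n in valid:
        neighbors = sorted((m for m in valid if m != n),
                           key=lambda m: dist2(n, m))[:k]
        for m in neighbors:
            augmented[n].add(m)
            augmented[m].add(n)
    return augmented
```

In a chain graph 0–1–2–3 where only nodes 0 and 3 pass the filter, the filtered subgraph is disconnected; augmentation adds the 0–3 edge so greedy traversal over filter-compliant nodes can still reach the true nearest neighbor.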
# 3.2.2 Multi-Modal MCP Servers
Multi-Modal MCP Servers deal with heterogeneous data ecosystems by unifying structured (e.g., relational databases), semi-structured (e.g., JSON logs, XML), and unstructured data (e.g., text, images, sensor streams) under a single adaptive framework. Leveraging LLMs and multi-modal AI architectures, these servers dynamically interpret, classify, and contextualize diverse data formats through techniques such as natural language processing (NLP) for unstructured text, computer vision for visual data, and graph-based reasoning for structured relationships. By embedding semantic understanding into the MCP layer, the system autonomously generates metadata, enforces cross-modal data governance policies, and enables federated queries that bridge tabular sales records, semi-structured IoT telemetry, and unstructured social media feeds. For instance, an LLM-powered MCP Server could correlate customer sentiment (extracted from unstructured reviews) with structured transactional data to optimize supply chain decisions, while simultaneously parsing semi-structured maintenance logs to predict equipment failures.
# 3.3 MCP Augmentor
# 3.3.1 Data Augmentor
After executing queries on modality-specific databases, each MCP server autonomously initiates an update procedure involving both data augmentation and model capability enhancement, as shown in Figure 5. In the data augmentation phase, TAIJI adopts a query-driven retrieval-then-synthesis strategy to keep the data lake semantically enriched and aligned with the latest developments. Upon completion of a user query, the MCP server triggers targeted information harvesting routines. These leverage both web crawlers and structured API-based agents configured to access dynamic and authoritative sources such as arXiv papers, GitHub repositories, Stack Overflow, and enterprise technical documentation.
The collected documents undergo a multi-stage processing pipeline designed to transform unstructured content into validated, semantically indexed knowledge. (1) Structural Reconstruction: The pipeline begins with document structure parsing, utilizing tools such as ScienceParse [7] and GROBID [14] to extract key structured elements, including titles, abstracts, section headers, equations, tables, figures, and code snippets. These tools employ a combination of rule-based heuristics and machine learning techniques to reconstruct the hierarchical organization of scientific and technical documents from raw formats such as PDFs and HTML. (2) Redundancy Elimination: In the redundancy elimination stage, a hybrid strategy combines MinHash-based fingerprinting for fast approximate set similarity detection with dense embedding-based retrieval. Specifically, documents are first fingerprinted using MinHash signatures generated over token-level n-grams. In parallel, dense vector embeddings are computed using models like Sentence-BERT [20], and clustered using HNSW-based approximate nearest neighbor search (via libraries like FAISS) to detect semantically similar passages. Duplicate or near-duplicate entries are then filtered out based on a configurable similarity threshold.
(Figure 5: data lake augmentation and model refreshing for the user question “Who is the current president of the United States?”, illustrating query-driven reinforcement, knowledge enrichment via insertion, and machine unlearning via deletion.)
(3) Entity Extraction: Next, domain-adapted named entity recognition (NER) is applied using fine-tuned transformer-based models trained on labeled datasets from scientific and software domains.
To resolve synonyms and disambiguate entities across sources, we apply context-aware entity linking using string similarity, co-occurrence statistics, and external knowledge bases (e.g., DBpedia, PapersWithCode APIs). To identify novel concepts and facts, we compute concept-level hashes using SimHash, which captures the semantic footprint of passages. These hashes are used to cluster conceptually similar content and measure deviation from existing entries in the data lake. We also apply graph-based semantic clustering over the extracted entities and their relationships to reveal emergent topics not previously indexed. (4) Data Augmentation: In the final augmentation and indexing stage, each candidate entry is validated across multiple independent sources to ensure factual consistency and reliability. Only corroborated entries (i.e., those referenced in at least two unrelated sources) are retained. The resulting validated knowledge units are indexed along multiple axes: modality (e.g., text, code, image), source credibility, and temporal metadata (e.g., crawl time, publication date). This indexing supports temporal knowledge reasoning, such as prioritizing more recent findings or applying decay-weighted relevance during downstream model retrieval.
# 3.3.2 Model Refreshing
In the Model Refreshing phase, to align model knowledge with the evolving data lake, we introduce a modality-specific parameter update mechanism tailored to three data operations: querying, insertion, and deletion. Thanks to the built-in subscription function of the MCP server, the MCP client can monitor changes to data resources via subscription and then trigger model refreshing. (1) Query-Driven Reinforcement: To enhance model performance based on frequently asked or emergent user intents identified from query logs, we first perform an in-depth analysis to extract the most common and critical intent patterns. These patterns are then leveraged to generate task-specific training samples.
Furthermore, to ensure that these synthesized samples effectively contribute to the model’s learning process, we apply importance sampling during the fine-tuning phase. Importance sampling assigns a higher weight to these specific training examples, based on their frequency and relevance in real-world use cases. This mechanism helps the model prioritize the learning of high-value user intents while preventing overfitting to less common or irrelevant patterns. By combining these methods, we significantly improve the MCP server’s ability to understand and respond to emergent user needs in real-time applications. (2) Insert via Knowledge Enrichment: When new knowledge is ingested into the data lake, it undergoes a structured transformation where relevant text spans or structured tuples are converted into instruction-style training samples. These samples are then systematically appended to the fine-tuning dataset. The conversion process involves identifying key knowledge components and framing them as prompts and expected responses, ensuring that the newly introduced information is aligned with the model’s reasoning mechanisms. By incorporating these structured samples, the model is exposed to explicit tasks that require reasoning over the updated knowledge, enabling it to more effectively process and apply this new information during inference. This method directly enhances the model’s capacity for knowledge integration, thereby improving its performance on tasks that necessitate reasoning over both the pre-existing and newly added knowledge. Furthermore, this fine-tuning approach mitigates the risk of knowledge forgetting, as the model learns to consistently reference the most recent data alongside the foundational knowledge it was initially trained on. 
(3) Deletion via Machine Unlearning: To address the removal of obsolete or sensitive knowledge, we employ advanced techniques such as gradient ascent unlearning or influence function-based data deletion to effectively erase the target information without compromising the integrity of the remaining model knowledge [27]. Specifically, we first compute data influence scores, which quantify the contribution of individual training samples to the model’s predictions. These scores are utilized to identify the most influential data points that need to be removed. Subsequently, we employ a fine-tuning strategy where negatively-weighted gradients are applied to the model, directing it to unlearn the targeted knowledge. This process is executed in a manner that is mindful of the need to preserve the model’s retained knowledge, ensuring that the unlearning of specific information does not lead to catastrophic forgetting of other critical data or degrade the overall performance [5]. The integration of these techniques enables a controlled and efficient forgetting mechanism that minimizes interference with the model’s pre-existing knowledge, making it an ideal approach for handling sensitive or outdated information. This unified update mechanism ensures that each MCP server maintains alignment between its internal LLM and its evolving modality-specific data lake, enabling accurate, efficient, and up-to-date multi-modal responses. In particular, it facilitates synchronized parameter tuning and schema-aware knowledge injection across modalities such as vision, audio, and text, thereby preventing semantic drift and enhancing the consistency of cross-modal representations. This design also supports incremental updates without requiring full model retraining, significantly reducing computational overhead while preserving reasoning fidelity.
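As a toy analogue of influence-based deletion (not the actual unlearning procedure used for LLMs), consider a model that is just a running mean of its training data: a sample's influence can be subtracted out exactly in constant time:

```python
def unlearn_from_mean(mean, n, x):
    """Remove sample x's contribution from a mean over n samples,
    yielding the mean the model would have had without ever seeing x."""
    return (n * mean - x) / (n - 1)
```

The same "subtract the influence" intuition underlies influence-function deletion for deep models, except there the influence must be approximated and applied via negatively-weighted gradient updates rather than computed in closed form.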
# 4 Preliminary Experiments
In this section, we evaluate the performance of a prototype of TAIJI, demonstrating the efficiency and effectiveness of the proposed MCP-based architecture in handling multi-modal data.
# 4.1 Experimental Setup
Dataset. We use the Craigslist furniture dataset [9], which contains both relational and image data, to evaluate the execution accuracy and latency of TAIJI. The dataset comprises 3000 listings from the “furniture” category of the website craigslist.org, detailing items such as sofas, tables, and chairs. It consists of two tables: furniture and image. The furniture table includes information about each item; since each furniture item has one or more related images, the image table contains the paths of the images corresponding to each item. We design three queries to verify that TAIJI offers advantages under varied selectivity and task complexity. For simplicity, the query plan is fixed: a filter is performed on the furniture table, followed by matches on images with specified predicates. The intermediate result size refers to the size of the set filtered by the relational predicates, and the image predicate refers to the image classification task. Table 1 gives a summary of the workload.
Table 1: Summary of the Workload
MCP Client Setting and Server Setting. As introduced before, we implement a disaggregated MCP-based data analytical system. Specifically, we leverage the ChatGPT API [1] in the MCP client, which is built upon the GPT-4.1 architecture [17]. Additionally, we use Qwen2.5-VL-7B [4] as an MCP server to analyze complex images. We also deploy another MCP server [19] to manage the structured and semi-structured data based on a PostgreSQL database.
Baseline. We compare our method to a pure GPT-4.1 model, which handles the query translation and data analysis altogether. In contrast, our approach splits the task into three sub-tasks with MCP: query translation, table filtering, and image analysis.
Note that as GPT-4.1 is not good at processing tabular data, we utilize PostgreSQL to help it handle the table filtering for a fair comparison.
Evaluation Metric. For the evaluation of TAIJI, we employ three metrics: precision, recall, and latency; the first two evaluate correctness, and the last evaluates efficiency. We compare the result set retrieved by the client with the ground truth to compute precision and recall. We also evaluate the latency of performing the queries in an end-to-end fashion.
# 4.2 Experimental Results
Figure 6: Precision Evaluation. Figure 7: Recall Evaluation. Figure 8: Latency Evaluation.
As shown in Figure 6 and Figure 7, TAIJI also outperforms GPT in precision (on Q2) and recall (on Q2 and Q3). Specifically, TAIJI achieves a precision of $85\%$ on Q2 while GPT has a precision of $65\%$; TAIJI has a recall of $71\%$ and $94\%$ on Q2 and Q3, respectively, while GPT has a recall of $65\%$ and $89\%$. Taking Q2, where TAIJI beats GPT in both precision and recall, as an example: it is hard to find the black chair among the candidate images because GPT can hardly distinguish a black chair from a black sofa. Figure 8 depicts the latency of TAIJI and GPT-4.1. Our approach is clearly faster than GPT, and as the intermediate results grow (i.e., Q1-Q3), the performance gap widens. On average, TAIJI improves the latency by $43\%$. Take Q3 as an example, where the intermediate size reaches 347: GPT is too heavy to offer high efficiency given its massive parameter count.
Implications of MCP-based Architecture over Pure LLM. Through the preliminary experiments, we obtain the following insights. First, compared with a pure LLM, the MCP-based architecture can judiciously select and harness the most appropriate LLMs to process multi-modal data. For instance, GPT may excel at text processing and query translation, but it turns out that Qwen is better than GPT at image processing.
Second, leveraging a tailored LLM (i.e., Qwen) with a much smaller number of parameters not only improves the accuracy but also results in much lower inference overhead. This implies that the MCP-based architecture is very promising for combining many small LLMs to cover a wide range of data modalities. Third, since the MCP-based architecture can offload a sub-task to a local MCP server, it further reduces the burden on the central LLM. This observation motivates us to explore more optimizations in the future, such as asynchronous execution, task scheduling, etc.
# 5 Challenges and Opportunities
MCP-based Data Security. Despite the salient features discussed in the previous sections, a major challenge of MCP-based data analytics is how to preserve data security while performing LLM-empowered data analytics. This is because MCP servers are developed by different vendors and contributors all over the world, and analyzing plain text/documents in the data lake poses high risks of data leakage. For instance, the MCP SDK or built-in LLM agents may upload the data to a web server. Therefore, new mechanisms are needed to enable privacy-preserving data transmission and analytics. On the one hand, the data transmission tunnel should be secured among MCP hosts/clients, MCP servers, and resources. On the other hand, accessing plain text should be avoided, and alternatives such as query processing directly over ciphertext should be explored. However, it is challenging to balance the trade-off between query efficiency and data privacy.
Hybrid Deployment Optimization. In addition to the local deployment of MCP servers, it is also possible to deploy remote MCP servers, resulting in a hybrid deployment. Hence, new methods are needed to harness the capability of hybrid deployment. However, it is challenging to effectively determine the data placement strategy and to migrate data among multiple MCP servers.
Moreover, it is also hard to efficiently discover data of high quality due to the huge search space and high latency.
Cost-Aware Model Management. As more and more MCP servers become involved, managing the multiple LLMs and their evolution has become a challenge. The reason is two-fold. First, existing approaches [24, 9, 25] consider only query performance, neglecting the dollar cost of LLM invocation, as LLMs charge per token. Second, model knowledge can also become obsolete, as discussed before, and fine-tuning incurs significant cost; thus, it is challenging to balance knowledge freshness against training cost. As a result, it is necessary to judiciously select and fine-tune the LLM models with cost-aware optimization.
Multi-Modal Database Benchmark. Since a unified and standard multi-modal database benchmark is lacking, existing approaches generally evaluate performance on different datasets and workloads. Hence, there is a pressing need for a new multi-modal database benchmark that covers various data modalities and workloads. However, it is challenging to collect or generate multi-modal data and workloads with fidelity, utility, and generality; even for a single modality, there can be numerous variants.
The variety of data in data lakes presents significant challenges for data analytics, as data scientists must simultaneously analyze multi-modal data, including structured, semi-structured, and unstructured data. While Large Language Models (LLMs) have demonstrated promising capabilities, they remain inadequate for multi-modal data analytics in terms of accuracy, efficiency, and freshness. First, current natural language (NL) or SQL-like query languages may struggle to precisely and comprehensively capture users' analytical intent. Second, relying on a single unified LLM to process diverse data modalities often leads to substantial inference overhead. Third, data stored in data lakes may be incomplete or outdated, making it essential to integrate external open-domain knowledge to generate timely and relevant analytics results. In this paper, we envision a new multi-modal data analytics system. Specifically, we propose a novel architecture built upon the Model Context Protocol (MCP), an emerging paradigm that enables LLMs to collaborate with knowledgeable agents. First, we define a semantic operator hierarchy tailored for querying multi-modal data in data lakes and develop an AI-agent-powered NL2Operator translator to bridge user intent and analytical execution. Next, we introduce an MCP-based execution framework, in which each MCP server hosts specialized foundation models optimized for specific data modalities. This design enhances both accuracy and efficiency, while supporting high scalability through modular deployment. Finally, we propose an updating mechanism that harnesses deep research and machine unlearning techniques to refresh the data lakes and LLM knowledge, with the goal of balancing data freshness and inference efficiency.
# 1 INTRODUCTION
NL2SQL translates natural language questions into SQL queries, making data analysis accessible to non-technical users and serving as a foundation for intelligent data applications like smart dashboards and visualizations. For instance, a business owner can simply ask, “What were last month’s total sales by product?” and the data application can use the NL2SQL system to generate the corresponding SQL, retrieve the relevant data, and summarize the results. Such possibilities make NL2SQL an important tool to explore. NL2SQL tools are increasingly being integrated into widely-used commercial database systems [1–3]. Usage of NL2SQL tools typically exhibits two behaviors. First, NL2SQL-generated queries are only a small fraction of the overall query workload that is executed on a database; most SQL queries are still either hand-written or machine-generated according to application logic. Second, users of NL2SQL tools care not only about the accuracy of the output SQL query, but also about the latency of invoking the NL2SQL tool. Recent improvements in NL2SQL technology have been propelled by the advent of large language models (LLMs). What sets LLMs apart from previous language models is their capacity to generalize to unseen tasks. LLMs achieve this through in-context learning, where examples of the task completion and information relevant to the task are provided within their context. Using this context, the LLM generates the desired output. Capitalizing on these capabilities, LLMs consistently excel in various tasks, including NL2SQL, where they currently lead on two well-known NL2SQL benchmarks: Spider [37] and BIRD [13]. Since NL2SQL aims to output SQL queries that can execute on the user’s database, one key step in the NL2SQL pipeline is acquiring database-specific information (e.g., the names of tables which the SQL query should reference) to provide in the LLM’s context.
Some commercial NL2SQL tools ask the user to manually specify certain information, such as the tables and views to be used in the SQL query [1] or the literals to use in filters [3]. On the other hand, NL2SQL techniques proposed in the research literature typically aim to automatically retrieve the relevant database-specific information without any human intervention [7, 20, 32]. However, one source of database-specific information is consistently ignored or underutilized by existing NL2SQL tools: the logs of past SQL queries which have executed on the database. These logs typically contain queries originating from various applications, including machine learning pipelines, ETL processes, and reporting tools, rather than being limited to those generated by NL2SQL interfaces. Databases often have repetitive workloads with similar queries in historical logs due to overlapping analyses driven by consistent business goals and standard reports. This leads to recurrences of similar patterns across queries, such as frequently-used join paths. As a result, SQL queries generated by NL2SQL systems often match or share components with logged queries. An Illustrative Example. We illustrate the potential benefit of using past query logs for NL2SQL with an example which is derived from the BIRD benchmark [13]. Fig. 1A depicts a chemistry database containing tables named atoms, cnt, and bond, among others. Notably, the cryptically-named cnt table is meant to “connect” the atoms and bond tables via their respective id fields. Assume that a number of SQL queries have already been run on this chemistry database, not necessarily generated from the NL2SQL interface. Given the user’s natural language question, an accurate NL2SQL system should produce the desired output SQL query, which joins atoms and bond via the cnt table. A typical NL2SQL pipeline (Fig. 1B) will invoke an LLM with a prompt that includes the user’s question and schema information from the database. 
However, due to the cryptic name of the cnt table, the LLM might erroneously choose an incorrect join path: in this example, it joins the atoms and bond tables on their respective id columns. Figuring out the correct join path based solely on table schemas is challenging for the LLM, especially if there is no join path information and the tables have no self-explanatory names. To mitigate this lack of information, we can take advantage of past queries to augment the information provided in the prompt (Fig. 1C): suppose that the join path between the atoms, cnt, and bond tables has been observed in multiple past queries, such as the query shown in Fig. 1A. We incorporate this common join path as a hint in the LLM prompt, which guides the LLM to generate the correct SQL query.
(Figure 1: (A) the input database schema, user question, past queries, and desired output SQL query; (B) a typical NL2SQL workflow; (C) the NL2SQL workflow using information from past queries, where hints about the atoms–cnt–bond join path are added to the LLM prompt.)
In this example, past queries provide information about common query patterns, which implicitly indicate the appropriate usage of schema items which may otherwise be unclear from inspecting the database schema alone. Note that hints are not directives, i.e., they do not force the LLM to act a certain way, but rather provide the LLM with useful information which the LLM is allowed to ignore. Following this intuition, we introduce TailorSQL, a NL2SQL system that tailors itself to a given query workload by harnessing the information that is implicitly stored in past queries.
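The idea of mining join-path hints from a query log can be sketched with a simple frequency count over equi-join clauses. This is an illustrative sketch using a regex (a real system would use a proper SQL parser); the function name and hint format are hypothetical:

```python
import re
from collections import Counter

def mine_join_hints(query_log, min_count=2):
    """Count equi-join conditions seen in past queries; conditions that
    recur at least min_count times become candidate hints for the prompt."""
    join_re = re.compile(
        r"join\s+\w+\s+on\s+(\w+\.\w+)\s*=\s*(\w+\.\w+)", re.IGNORECASE)
    counts = Counter()
    for sql in query_log:
        for lhs, rhs in join_re.findall(sql):
            # Normalize so a.x = b.y and b.y = a.x count as the same path.
            counts[tuple(sorted((lhs.lower(), rhs.lower())))] += 1
    return [f"{a} joins with {b}" for (a, b), c in counts.items()
            if c >= min_count]
```

Run over the chemistry example's log, the recurring atoms-to-cnt join condition would surface as a hint, while a join seen only once would be dropped by the frequency threshold.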
TailorSQL improves both the accuracy and latency of NL2SQL with two core ideas. First, TailorSQL performs an offline analysis of the past query workload and extracts hints from past queries (e.g., the hint about the common join path in Fig. 1). Hints provide useful information for accurate NL2SQL translation which is missing from schema information alone. Individual hints are stored as text-based documents and can be retrieved into the LLM context if relevant for a given question. TailorSQL also stores schema information in documents. Second, TailorSQL introduces a workload-specialized framework for retrieving the relevant documents for a given user question, while ignoring irrelevant documents. Retrieval frameworks (see Section 2) are often necessary for real-world NL2SQL pipelines because LLMs are subject to a context limit, which is the maximum amount of input text the LLM can process. Database schemas often comprise thousands of tables and including all of them in the limited prompt context becomes infeasible. Even in cases where the entire schema fits into the prompt context, it is desirable to avoid adding irrelevant information to an LLM’s prompt, since doing so will incur higher latency and cost when invoking the LLM. A typical NL2SQL retrieval framework maps user questions and documents into an embedding space, ensuring the proximity of questions to relevant documents; then for a given question, the framework retrieves documents in order of decreasing embedding similarity until the LLM’s context is filled. TailorSQL’s retrieval framework improves upon the typical framework by using the past query workload to create fine-tuned document embeddings, which improves its ability to retrieve relevant documents while avoiding irrelevant documents. Furthermore, instead of using a single context size limit for all documents, TailorSQL performs an analysis over past queries to allocate a separate context limit for each class of document (i.e., schema vs. 
hint documents) in order to mitigate class imbalances when performing document retrieval. Finally, given that TailorSQL specializes to a specific workload, it may perform poorly if the workload distribution changes in the future. To mitigate such performance regressions, we introduce an abstention policy, which TailorSQL uses to dynamically decide whether to abstain from using query hints in the NL2SQL pipeline (i.e., whether to fall back to a generic NL2SQL pipeline that does not use TailorSQL’s proposed specializations). The abstention policy is based on a multi-armed bandit framework and incorporates user feedback into the decision-making process. We evaluate TailorSQL across multiple NL2SQL benchmarks, demonstrating up to $2\times$ improvement in execution accuracy compared to baselines that do not take advantage of past queries. Additionally, for the same execution accuracy, TailorSQL improves SQL generation latency by $2{-}4\times$ compared to the baselines. Furthermore, TailorSQL does not overfit to historical queries and exhibits robustness against shifts in query distribution. In summary, we make the following key contributions:
• We present TailorSQL, a NL2SQL system that tailors itself to a given database by utilizing database-specific hints extracted from past queries to improve accuracy.
• We demonstrate how TailorSQL tailors its embedding and retrieval procedures to a given database, which improves SQL generation latency without sacrificing accuracy.
• We ensure that TailorSQL is robust to workload distribution shifts using a bandit-based policy to decide when to abstain from utilizing query hints.
• We present an evaluation of TailorSQL’s overall performance in terms of accuracy and latency as well as microbenchmarks on its individual components.
# 2 PRELIMINARIES
In this section we give background on retrieval models and retrieval-augmented generation, which is useful for understanding TailorSQL.
Dense Retrieval Models and Document Retrieval: Dense retrieval models are a type of information retrieval method that efficiently matches text-based queries to text-based documents by encoding queries and documents into dense vectors (commonly referred to as embeddings) in a continuous vector space, then retrieving documents whose embeddings are similar to a given query embedding. The similarity between a query embedding $E_q$ and a document embedding $E_d$ in the embedding space can be measured using various similarity metrics. One popular metric, which we use throughout this paper, is cosine similarity, defined as:
$$ \mathrm{CosSim}(E_q, E_d) = \frac{E_q \cdot E_d}{\|E_q\| \cdot \|E_d\|} $$
where the numerator denotes the dot product between the query and document embeddings, and the denominator represents the product of their respective Euclidean norms. Cosine similarity varies between $-1$ (denoting perfect dissimilarity) and $1$ (denoting perfect similarity). Semantically-related queries and documents should ideally have similar embeddings. A common approach for generating embeddings is to use a pretrained sentence transformer such as SentenceBERT (SBERT) [22], which has been trained on large text corpora specifically for the purpose of generating semantically meaningful embeddings of sentences such that embeddings can be compared using cosine similarity. Here, “sentence” refers to arbitrary pieces of text that can contain multiple English sentences. Therefore, to avoid ambiguity, for the remainder of this paper we use the term document instead of “sentence.” Given a query, one can retrieve the top-K most similar documents by computing the similarity between the query embedding and all document embeddings. To enable efficient retrieval at query time, it is common to precompute document embeddings offline.
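The retrieval step above reduces to computing Eq. (1) against every precomputed document embedding and keeping the top-K. A brute-force sketch (production systems would typically use an approximate nearest-neighbor index instead):

```python
import math

def cos_sim(e_q, e_d):
    """Cosine similarity between a query and a document embedding."""
    dot = sum(q * d for q, d in zip(e_q, e_d))
    norm = math.sqrt(sum(q * q for q in e_q)) * math.sqrt(sum(d * d for d in e_d))
    return dot / norm

def retrieve_top_k(query_emb, doc_embs, k):
    """Indices of the k documents with the highest cosine similarity."""
    ranked = sorted(range(len(doc_embs)),
                    key=lambda i: cos_sim(query_emb, doc_embs[i]),
                    reverse=True)
    return ranked[:k]
```

Given a query embedding and a list of document embeddings, `retrieve_top_k` returns the document indices in decreasing similarity order, which is exactly how documents are pulled into the prompt until the context budget is filled.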
Retrieval-Augmented Generation (RAG) is an approach that merges retrieval-based and generative models to improve text generation tasks. RAG comprises three main elements: a document store, retrieval model, and generative model. The document store contains relevant text-based documents, typically along with each document’s precomputed embedding. The retrieval model efficiently selects the most relevant documents based on a given query, employing techniques described above. Typically, the generative model is a pre-trained LLM, which produces contextually-relevant text based on the query and retrieved documents. When RAG is applied to NL2SQL, the document store typically contains documents with database-specific information, such as schema information (e.g., a document describing the column names and data types of a particular table) or database documentation (e.g., a document describing the supported SQL syntax for the database’s particular SQL dialect).

# 3 TAILORSQL SYSTEM OVERVIEW

TailorSQL is a NL2SQL system which accepts a natural language question as input from the user and produces a corresponding SQL query as output. Like other recent NL2SQL systems [7, 20, 32], TailorSQL employs an LLM by creating a prompt based on the input question, invoking the LLM using the prompt, and extracting the SQL output from the LLM’s response. TailorSQL’s core novelty compared to prior NL2SQL systems is its ability to specialize the NL2SQL pipeline to the user’s workload by taking advantage of historical query logs. We first describe the end-to-end workflow of TailorSQL’s NL2SQL pipeline, with a particular emphasis on how TailorSQL specializes each part of this pipeline using historical query logs. We then provide some further illustrative examples that intuitively showcase how historical query logs can improve NL2SQL accuracy.

# 3.1 End-to-End Workflow

TailorSQL’s NL2SQL system (shown in Fig. 2) is composed of an offline pipeline and a runtime pipeline.
The offline pipeline preprocesses the information in the database and historical query logs so that the information can be retrieved for user questions. The offline pipeline must run before TailorSQL can start serving user questions, and it can be retriggered periodically (e.g., every night) so that TailorSQL is up to date with changes in the database schema or query workload patterns. The offline pipeline has two parts:
• Document generation: we compile information from the database which might be useful to include in the LLM prompt into individual pieces of text called documents. For example, prior NL2SQL techniques often generate one document for each table in the database, containing information such as the table’s name and the name and data type of its columns. TailorSQL specializes to the workload by additionally generating documents containing information about common patterns in the historical query workload. Section 4 describes the process in further detail.
• Document embedding generation: each document is identified by a fixed-length vector embedding. The embedding is meant to capture semantic information about the contents of the document, so that the embedding of a document is similar (in terms of cosine similarity; see Eq. (1)) to the embeddings of user questions which make use of the document and dissimilar to the embeddings of unrelated user questions. A common approach for generating document embeddings is to feed the content of the document through a pretrained embedding model, such as SBERT [22]. TailorSQL specializes to the workload by generating document embeddings that are a function of not only the document’s contents, but also of other related documents and related queries from the historical logs, which helps capture semantic information that is not present in the document’s contents alone. Section 5 describes the process in further detail.
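As a sketch of the document-generation step, the following renders one schema document per table; the `schema_document` helper and the in-memory column dictionary are hypothetical stand-ins for a real catalog lookup:

```python
def schema_document(table: str, columns: dict[str, str]) -> str:
    """Render one table's schema as a CREATE TABLE text document (hypothetical format)."""
    cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in columns.items())
    return f"CREATE TABLE {table} (\n  {cols}\n);"

# One document per table, as in prior NL2SQL techniques; the columns are made up.
doc = schema_document("user", {"userid": "INTEGER", "birthdate": "INTEGER"})
```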
The runtime pipeline is triggered whenever the user asks a natural language question, and its output is a SQL query. The runtime pipeline has two components:
• Document retrieval: we compute the embedding of the user question, then compute the cosine similarity between the question embedding and the document embeddings which were generated offline. Prior NL2SQL techniques would then retrieve documents in decreasing order of similarity, until the prompt limit is saturated (i.e., until the total number of tokens in all retrieved documents surpasses the token limit of the LLM prompt). However, prior techniques do not distinguish between different types of documents. TailorSQL introduces a context allocator which performs an offline analysis to determine how much of the context to allocate for documents of each class in order to balance the precision and recall of document retrieval on the historical query workload. Section 6 describes the process in further detail.
• SQL generation: we assemble the retrieved documents into a prompt template. The same prompt template is used across all user questions. We invoke the LLM with the constructed prompt and parse the LLM response to yield the SQL query.

Figure 2: In an offline process, TailorSQL (A) generates schema and query hint documents based on information in the database, then (B) generates an embedding for each document using a combination of the document itself and several pieces of auxiliary information related to the document. (C) At runtime, we compute similarity between the user question’s embedding and document embeddings in order to retrieve the most relevant documents, up to a per-document-class token limit. (D) The prompt, which includes the question and information in retrieved documents, is used to invoke an LLM to generate the SQL query.

Figure 3: Past filter predicates help interpret column values. (The figure shows the database schema user {userid, birthdate}.)
TailorSQL does not perform any workload specialization for the SQL generation step. Indeed, prior NL2SQL systems make use of complex reasoning pipelines to perform SQL generation, e.g., by taking advantage of chain-of-thought and least-to-most prompting [20]. These improvements to SQL generation are orthogonal and complementary to TailorSQL’s use of information from past queries. Specialized NL2SQL pipelines perform well when future questions have similar patterns as historical query logs, but can perform poorly otherwise. To avoid performance regressions, TailorSQL uses an abstention policy (Section 7) to dynamically decide when to fall back to using a non-specialized NL2SQL pipeline.

[Figure 3 contents: a user table with columns userid and birthdate, where birthdate is an 8-digit integer (e.g., 10201 / 05052005); a query hint noting the filter predicate `substring(birthdate, 5, 4) = '2003'` seen in 3 past queries; the user question “Find the number of customers born in 1991”; the LLM output without the query hint is `select count(*) from user where birthdate = '1991'`, while the output with the hint is `select count(*) from user where substring(birthdate, 5, 4) = '1991'`.]

# 3.2 Illustrative Examples

The example from Section 1 showed how past queries can help disambiguate cryptically-named tables by providing information about common join paths. We now present two further examples where past queries provide information beyond that which is already provided through the database schema.
Filter Expressions: Filter expressions in past queries provide insights into the format of various columns, including literals within those columns, aiding the LLM in determining the appropriate filter for a query. In Fig. 3, the user table contains user information such as ids and birth date. Notably, birthdate is stored as an 8-digit integer in a distinctive manner. Past queries use filter expressions of a specific form to accurately extract the year from birthdate values. Incorporating this query hint into the prompt assists the LLM in understanding the format of the birthdate column.

[Figure 4 contents: a database schema with venue {id, name, state} and events {id, venueid, num_tickets}; a query hint noting that the group-by `group by v.id` was seen in 6 past queries that read the venue table; the user question “Find the name of the venue with the most events”; both LLM outputs are of the form `select v.name, count(*) as num_events from venue v join event e on v.id = e.venueid ... order by num_events desc limit 1`, but the output without the query hint uses `group by v.name` while the output with the hint uses `group by v.id`.]

Group-By Clauses: In Fig. 4, the id column serves as the primary key for the venue table, not the name column, since multiple venues may share the same name. Past aggregation queries tend to group by the id column, signifying that id refers to unique venues. If a user requests information about venues that requires aggregation, a query hint based on past group-by clauses will guide the LLM to aggregate over the id column instead of the name column.

# 4 DOCUMENT STORE

TailorSQL uses information about the database schema and past query workload to help generate accurate SQL queries for user questions.
TailorSQL stores this information in two broad classes of documents: (1) schema documents, which capture information about the database schema, and (2) query hint documents, which capture information from the past query workload.

# 4.1 Schema Documents

Schema documents capture information about the database schema. Similar to prior NL2SQL techniques [20, 32], TailorSQL generates a document for each table in the database. By default, the schema document will contain the CREATE TABLE SQL statement which is used to create the table, which typically captures the following information: (1) the table’s name, (2) the name, data type, default value, nullability, and other properties for each column, and (3) table properties such as the primary key and any foreign keys. All of this information helps the LLM reason about the correct SQL for a user question. Note that the exact contents of the schema document will vary depending on the database management system. For example, some multi-node database systems define distribution keys for each table, which do not exist for single-node systems. Indeed, we found that certain additional information beyond the CREATE TABLE SQL statement is useful for TailorSQL, which we describe further in Section 8.2. In general, the exact contents of the schema document are orthogonal to the core idea of TailorSQL, and TailorSQL’s procedure will work with whatever schema documents are generated for a given DBMS.

# 4.2 Query Hint Documents

Unlike schema documents, which are standard in prior NL2SQL techniques, query hint documents are a novel class of documents introduced by TailorSQL. Query hint documents capture information about common query structures in the past workload. Since queries often repeat in analytic workloads, including information about past query structures should help the LLM generate accurate SQL for future user questions.
There are many options for the structure of query hint documents, which vary on a tradeoff space between information content and information density:
• Create a hint document for each query observed in the past workload, containing the SQL text of the query. This provides the maximum amount of information to the LLM. However, there are several disadvantages to this approach: (1) SQL queries can have very long text [30, 31], which quickly exhausts the context space. (2) SQL queries often have repetitive text, which means that hint documents would contain redundant information. For exact repeats of the same SQL query text, it is simple to deduplicate documents, but queries often do not repeat exactly but rather have repeating subcomponents (e.g., repeating filter predicates [30]). (3) SQL query text often contains boilerplate text which does not provide any useful information.
• To alleviate the redundancy described above, an alternative is to create a hint document for each past query template, where a template is created by removing literals from the SQL text, and then to store the literals separately. This approach might help reduce information redundancy for dashboarding and reporting workloads where the same queries are issued repeatedly with differing literals, but it does not work as well for ad-hoc query workloads where distinct templates are not as prevalent.
• Break down each past SQL query into subcomponents, where each subcomponent roughly corresponds to a different SQL clause, and generate a hint for each distinct subcomponent. The intuition is that queries in the past workload, despite being distinct overall, share significant subcomponents like join paths and filter predicates. Although this approach decreases information redundancy in documents, its disadvantage is that by extracting only a subset of information in each query, we lose some information content.
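For intuition on the template option above, here is a crude sketch that strips literals from SQL text with regular expressions; a real system would use a proper SQL parser, and quoting and escaping edge cases are ignored here:

```python
import re

def to_template(sql: str) -> str:
    """Replace string and numeric literals with a placeholder to form a query template."""
    sql = re.sub(r"'[^']*'", "?", sql)   # string literals
    sql = re.sub(r"\b\d+\b", "?", sql)   # bare integer literals
    return sql

# Two queries that differ only in a literal collapse to the same template,
# so their hint documents deduplicate.
t1 = to_template("select * from user where birthdate = '1991'")
t2 = to_template("select * from user where birthdate = '2003'")
```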
TailorSQL uses the approach of breaking down past queries into subcomponents, and in Section 8.2 we describe the specific content of hint documents used in our evaluation. However, we believe the workload-specialization techniques described in this paper are helpful regardless of the exact format of hint documents. Deciding on the content of hint documents, similar to deciding on the content of schema documents and LLM prompt engineering more generally, is more an art than a science, and further optimization to the hint document format is left to future research.

# 5 DOCUMENT EMBEDDING GENERATION

In this section, we describe how TailorSQL generates an embedding for each of the documents in its document store. One simple workload-agnostic approach is to feed the contents of a document through an embedding model such as SBERT [22], which produces an embedding that captures the semantics of the document contents. For the remainder of this section, we refer to a document’s embedding produced by feeding its contents through an embedding model as a raw document embedding. However, TailorSQL generates tailored embeddings in a manner that makes use of the past query workload, which we show in Section 8 to perform better than workload-agnostic raw SBERT embeddings. For a given user question, we define a relevant document as one which is helpful for answering the question. A schema document is relevant to a question if the table described in the document is used in the SQL query that answers the question. Similarly, a query hint document is relevant to a question if any content in the hint document is used in the SQL query that answers the question. TailorSQL’s embedding generation procedure takes advantage of the following intuition: the ideal document embedding is one that maximizes the similarity between the document embedding and the embeddings of future relevant user questions (and also minimizes its similarity to the embeddings of future irrelevant questions).
We first describe how TailorSQL formalizes this intuition into an optimization objective which can be used to quantify the goodness of a document embedding. We then describe TailorSQL’s procedure for generating document embeddings to optimize that objective.

# 5.1 Optimization Objective

We define embedding similarity as the cosine similarity (see Eq. (1)) between two embeddings. For a given pair of question embedding $\mathbf{E}_{\mathrm{Q}}$ and document embedding $\mathbf{E}_{\mathrm{doc}}$, we define cosine loss as:

$$ \mathrm{loss}(\mathbf{E}_{\mathrm{Q}}, \mathbf{E}_{\mathrm{doc}}) = \begin{cases} 1 - \cos(\mathbf{E}_{\mathrm{Q}}, \mathbf{E}_{\mathrm{doc}}) & \text{if doc relevant for Q} \\ \max(0, \cos(\mathbf{E}_{\mathrm{Q}}, \mathbf{E}_{\mathrm{doc}})) & \text{if doc irrelevant for Q} \end{cases} $$

Loss is high when either (1) the document is relevant to the question but the embeddings are not similar, or (2) the document is not relevant to the question but the embeddings are similar. One challenge is that we do not know the exact questions that the user will ask in the future, so it is difficult to directly optimize for $\mathrm{loss}(\mathbf{E}_{\mathrm{Q}}, \mathbf{E}_{\mathrm{doc}})$. TailorSQL tackles this challenge by taking advantage of the observation that in a stable workload, past SQL queries are likely to be similar to the queries that answer future user questions. Therefore, TailorSQL generates a synthetic question workload using the past SQL queries: for each past query, TailorSQL generates a synthetic question which, when given to a NL2SQL system, would yield that particular past query as the answer. TailorSQL employs an LLM to generate these synthetic questions.
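The cosine loss defined above translates directly into code; this is a minimal NumPy sketch with toy embeddings:

```python
import numpy as np

def cos(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cosine_loss(e_q: np.ndarray, e_doc: np.ndarray, relevant: bool) -> float:
    """Penalize dissimilarity for relevant pairs and similarity for irrelevant pairs."""
    if relevant:
        return 1.0 - cos(e_q, e_doc)
    return max(0.0, cos(e_q, e_doc))

e_q = np.array([1.0, 0.0])    # toy question embedding
e_doc = np.array([1.0, 0.0])  # toy document embedding
```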
The model is prompted with the SQL query and the schema of the tables involved in the query, and is tasked with generating a potential user question. Given this synthetic question workload, our objective is to find an embedding for every document in order to minimize the total cosine loss for all documents in the document store $\mathbf{D}$ over all queries in the synthetic question workload $\mathbf{Q}$:

$$ \operatorname*{min} \sum_{\mathrm{doc} \in \mathbf{D}} \sum_{\mathrm{synthQ} \in \mathbf{Q}} \mathrm{loss}(\mathbf{E}_{\mathrm{synthQ}}, \mathbf{E}_{\mathrm{doc}}) \tag{2} $$

# 5.2 Optimization Procedure

Given this optimization objective, one possible strategy for determining document embeddings is to independently create an embedding for each document that minimizes cosine loss over the synthetic question workload. However, we found that this leads to overfitting. For example, in a degenerate case where a document is only relevant for a single synthetic question, then we would set the document embedding to be the same as the synthetic question embedding; clearly, this would not generalize to future questions which make use of the document, but which are not the same as the synthetic question. Therefore, instead of allowing arbitrary embeddings for each document, in TailorSQL we impose a structure for document embeddings. For each document, we generate a number of proxy embeddings, which are embeddings that should be similar to the relevant question embeddings and which capture information that may not be present in the content of the document itself:
• We identify all the past queries which this document is relevant for. We use the embedding model (e.g., SBERT) to generate an embedding of each SQL query based on the query text, then average the embeddings across all queries: $\mathbf{E}_{\mathrm{SQL}}$.
We include this proxy embedding because documents are expected to be similar to the queries which use them, and SQL query information is not entirely present in documents and so would not be accounted for in the raw document embedding.
• For each of the relevant past queries, we prompt an LLM to generate a synthetic user question, and take the average embedding of the synthetic user questions: $\mathbf{E}_{\mathrm{synthQ}}$. We naturally want document embeddings to be similar to the embeddings of questions which might use them.
• We identify co-occurring documents, which are other documents that are relevant to at least one of the queries that the current document is relevant to. We average the raw embeddings of all co-occurring documents: $\mathbf{E}_{\mathrm{co\text{-}occur}}$. We include this proxy embedding because this information is useful for documents with obscure meanings, as co-occurring documents with clearer semantics can help improve their understanding. For example, in Fig. 1, atom and bond have clearer semantics than the cnt table.

For a given document, its embedding is the weighted sum of each of its proxy embeddings, along with its raw embedding, $\mathbf{E}_{\mathrm{raw}}$:

$$ \mathbf{E}_{\mathrm{doc}} = w_1 \cdot \mathbf{E}_{\mathrm{raw}} + w_2 \cdot \mathbf{E}_{\mathrm{co\text{-}occur}} + w_3 \cdot \mathbf{E}_{\mathrm{SQL}} + w_4 \cdot \mathbf{E}_{\mathrm{synthQ}} $$

We use the same weights for all documents, i.e., the weights must be optimized once per workload, not once per document. Using the same weights for the entire workload encourages generalization. For a given workload, we find the weights that minimize the optimization objective (Eq. (2)) using gradient descent. Proxy embeddings essentially extend the information content of the document itself.
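A sketch of the weighted-sum construction of a tailored document embedding; the random vectors stand in for the raw and proxy embeddings, and the example weights are arbitrary (in TailorSQL they would be fit once per workload by gradient descent):

```python
import numpy as np

def tailored_embedding(e_raw, e_co_occur, e_sql, e_synth_q, w):
    """Weighted sum of the raw embedding and the three proxy embeddings."""
    w1, w2, w3, w4 = w
    return w1 * e_raw + w2 * e_co_occur + w3 * e_sql + w4 * e_synth_q

dim = 4
rng = np.random.default_rng(0)
# Stand-ins for E_raw, E_co-occur, E_SQL, E_synthQ.
e_raw, e_co, e_sql, e_q = (rng.standard_normal(dim) for _ in range(4))
e_doc = tailored_embedding(e_raw, e_co, e_sql, e_q, (0.4, 0.2, 0.2, 0.2))
```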
It is as if, instead of generating an embedding for a document based purely on the contents of the document itself, we are generating an embedding based on an augmented document that also includes information about relevant SQL queries and co-occurring documents. Instead of creating proxy embeddings, we could have achieved a similar effect by generating a “virtual” augmented document (which includes the original document contents along with the text of co-occurring documents, relevant SQL queries and synthetic questions) and using SBERT to generate an embedding of this augmented document. However, we found that by using a weighted sum, we have more fine-grained control over the relative importance of each piece of information when constructing the embedding, which produced better embeddings.

# 6 DOCUMENT RETRIEVAL

When the user asks a question to TailorSQL, we first convert the question into an embedding by feeding the question contents through the embedding model (e.g., SBERT). We then retrieve relevant documents from the document store and put their contents into the LLM prompt. One simple workload-agnostic approach is to retrieve documents in descending order of similarity between the question embedding and document embedding, until the LLM prompt context is filled. However, there are several drawbacks to this simple retrieval approach, related to the existence of multiple document classes:
• The optimal retrieval recall and precision may differ for each document class, where recall is defined as the fraction of relevant documents that are retrieved and precision is defined as the fraction of retrieved documents that are relevant.
For example, it is intuitively more critical to have high recall for schema documents than for hint documents: if a relevant schema document is not present in the context, then TailorSQL will have difficulty generating the correct SQL because it does not know what table or column name to use, but if a relevant hint is not present in the context, TailorSQL might still generate the correct SQL.
• The number of documents in each class may be vastly different. By retrieving documents in a class-agnostic manner, we may end up with many more of one document class than the other, which may not be desirable.
• The scale of embedding similarities for each document class may be different. For example, query hint documents may naturally have embeddings that are less similar to question embeddings than schema documents, due to the difference in information content of the two documents. Some of these scale differences are mitigated due to the document embedding generation process (Section 5), but there are nonetheless still effects due to the inclusion of raw document embeddings in the weighted sum that produces the tailored document embeddings.

To address these drawbacks, TailorSQL performs document retrieval in a workload-adaptive manner. TailorSQL performs an offline analysis over the past query workload to determine a context allocation over the document classes, i.e., a way to split up the number of tokens in the context among the document classes. When performing document retrieval for a given user question, we fill the allocated context for each document class independently. That is, for each document class, we retrieve documents of that class in descending order of similarity until the class context limit is reached. TailorSQL determines context allocation using Bayesian optimization. Specifically, we select a sample of past queries and use an LLM to generate a synthetic question for each query.
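The class-wise context fill described above can be sketched as follows; the document classes, similarity scores, token counts, and budgets are illustrative:

```python
def fill_context(docs, budgets):
    """docs: list of (doc_class, similarity, token_count, text) tuples.
    For each class, take documents in decreasing similarity until that
    class's token budget would be exceeded."""
    chosen, used = [], {cls: 0 for cls in budgets}
    for cls, sim, tokens, text in sorted(docs, key=lambda d: d[1], reverse=True):
        if used[cls] + tokens <= budgets[cls]:
            used[cls] += tokens
            chosen.append(text)
    return chosen

docs = [
    ("schema", 0.9, 50, "table venue"),
    ("schema", 0.8, 60, "table events"),
    ("hint",   0.7, 40, "group by v.id"),
    ("hint",   0.6, 40, "join venue-events"),
]
# schema budget fits only the first schema doc; hint budget fits only the first hint.
picked = fill_context(docs, {"schema": 100, "hint": 50})
```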
The optimization objective is to identify a context allocation that maximizes TailorSQL’s accuracy on the synthetic workload while adhering to a user-specified token limit. As we will describe in Section 8.2, TailorSQL uses two classes of schema documents: table documents and column documents. Thus, the optimization is expressed as:

$$ \begin{aligned} \text{maximize} \quad & \mathrm{Accuracy}(t_{\mathrm{tbl}}, t_{\mathrm{col}}, t_{\mathrm{hint}}) \\ \text{subject to} \quad & 0 \le t_{\mathrm{tbl}} \le T, \quad 0 \le t_{\mathrm{col}} \le T, \quad 0 \le t_{\mathrm{hint}} \le T, \\ & t_{\mathrm{tbl}} + t_{\mathrm{col}} + t_{\mathrm{hint}} \le T \end{aligned} $$

Here, $t_{\mathrm{tbl}}$, $t_{\mathrm{col}}$, and $t_{\mathrm{hint}}$ represent the number of tokens allocated to table documents, column documents, and hint documents, respectively, subject to the total token constraint $T$. However, a naive Bayesian optimization implementation that samples configurations from $[0, T] \times [0, T] \times [0, T]$ would waste time exploring configurations that do not satisfy the token limit constraint. To address this, we reparameterize the problem by introducing variables that effectively eliminate the constraint:
• $p$: Fraction of the token limit that is allocated. The remaining tokens are not allocated to any document class and remain unused.
• $p_{\mathrm{tbl}}$: Fraction of the allocated tokens that is allocated to tables.
• $p_{\mathrm{col}}$: Fraction of the remaining allocated tokens (after table allocation) that is allocated to columns.
Using these new variables, the token allocations are expressed as:

$$ \begin{aligned} t_{\mathrm{tbl}} &= T \cdot p \cdot p_{\mathrm{tbl}} \\ t_{\mathrm{col}} &= T \cdot p \cdot (1 - p_{\mathrm{tbl}}) \cdot p_{\mathrm{col}} \\ t_{\mathrm{hint}} &= T \cdot p \cdot (1 - p_{\mathrm{tbl}}) \cdot (1 - p_{\mathrm{col}}) \end{aligned} $$

By reparameterizing the token allocation in this manner, the original constraint $t_{\mathrm{tbl}} + t_{\mathrm{col}} + t_{\mathrm{hint}} \le T$ is naturally satisfied, as all components are expressed as proportions of the total token limit. The reformulated optimization problem is now given by:

$$ \begin{aligned} \text{maximize} \quad & \mathrm{Accuracy}(p, p_{\mathrm{tbl}}, p_{\mathrm{col}}) \\ \text{subject to} \quad & 0 < p \le 1, \quad 0 \le p_{\mathrm{tbl}} \le 1, \quad 0 \le p_{\mathrm{col}} \le 1 \end{aligned} $$

This reformulation ensures a clean optimization space, making it particularly well-suited for standard Bayesian optimization. We use Bayesian optimization, instead of a simpler method such as gradient descent, to determine context allocation for several reasons: (1) Bayesian optimization is more sample-efficient than gradient descent (i.e., it takes fewer iterations), which is ideal for cases where evaluations of the optimization function are expensive. In our case, evaluating the objective function involves running the NL2SQL pipeline and invoking the LLM for each synthetic question, which is indeed expensive.
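The reparameterization can be checked mechanically: for any fractions in $[0, 1]$, the three budgets sum to $T \cdot p \le T$, so the token constraint always holds. A small sketch (the token limit and fractions are example values):

```python
def allocate(T: float, p: float, p_tbl: float, p_col: float):
    """Map unconstrained fractions in [0, 1] to token budgets that
    always respect the total limit T, per the reparameterization above."""
    t_tbl = T * p * p_tbl
    t_col = T * p * (1 - p_tbl) * p_col
    t_hint = T * p * (1 - p_tbl) * (1 - p_col)
    return t_tbl, t_col, t_hint

# Example: allocate 90% of an 8000-token limit, half to tables,
# then split the remainder evenly between columns and hints.
t_tbl, t_col, t_hint = allocate(8000, 0.9, 0.5, 0.5)
```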
(2) There are interactions between different document classes which make the objective function surface complex and multi-modal. For example, increasing the context allocation for a given document class is not always desirable: although larger context improves retrieval recall, it may degrade precision, and a high concentration of irrelevant documents may in fact distract the LLM [14]. Bayesian optimization is better at finding global optima, whereas gradient descent may get stuck in local optima.

# 7 ABSTENTION POLICY

TailorSQL specializes its NL2SQL workflow for a given query workload. However, its performance may degrade when the workload characteristics change (e.g., tables which were commonly queried in the past are now used rarely), since its specializations are no longer aligned with the user questions. In this section, we describe TailorSQL’s abstention policy, which we use to decide whether user questions no longer align with the workload that TailorSQL was specialized for, and therefore to instead answer user questions using a generic, non-workload-specialized NL2SQL pipeline. TailorSQL’s abstention policy relies on the existence of two NL2SQL pipelines: TailorSQL’s specialized runtime pipeline (i.e., Fig. 2C-D), and a generic runtime pipeline which does not use tailored embeddings or context allocations for document retrieval. Instead, the generic pipeline only retrieves schema documents by comparing similarity between the question embedding and raw document embeddings. Note that if we already ran TailorSQL’s offline pipeline, there is no additional overhead for supporting a generic runtime pipeline: the schema documents and raw document embeddings needed for the generic pipeline already exist. Conceptually, if a user question is similar to the past query workload, then we should use the specialized pipeline, and otherwise we should use the generic pipeline.
Instead of directly comparing the similarity of user questions against the past query workload, TailorSQL uses runtime feedback to inform its abstention policy. We assume that after TailorSQL answers a user question with a SQL query, the user gives a binary signal (e.g., thumbs up or thumbs down) about whether they find the answer correct or useful. TailorSQL uses a multi-armed bandit as its abstention policy: for each incoming question, TailorSQL chooses one of the two pipelines to run. After running, we collect the binary feedback from the user. We maintain the average feedback over all past questions that are run on each pipeline, where a thumbs up maps to 1 and a thumbs down maps to 0. TailorSQL chooses which pipeline to run using an $\epsilon$-greedy strategy: with probability $\epsilon$ we choose a random pipeline, and with probability $1 - \epsilon$ we choose the pipeline with the higher average historical feedback. TailorSQL’s abstention policy has two aspects which are different from a typical $\epsilon$-greedy strategy: (1) We maintain a sliding window of feedback, so that feedback that was given earlier than the window’s start boundary is not considered when computing the average feedback for a pipeline. We do not want to maintain stale feedback, since we are only concerned with determining which pipeline is better for the current workload. (2) We delete all collected feedback whenever we retrigger TailorSQL’s offline pipeline, i.e., whenever we regenerate documents and embeddings (see Section 3.1). Alternative Formulations: Instead of a multi-armed bandit, we also considered using a contextual bandit formulation. Intuitively, the multi-armed bandit determines which pipeline the current workload should be run on, whereas a contextual bandit determines which pipeline a given question (i.e., the decision “context”) should be run on.
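A minimal sketch of such an $\epsilon$-greedy abstention policy with a sliding feedback window; the window size, $\epsilon$, and the neutral default for an arm with no feedback are illustrative choices, not taken from the paper:

```python
import random
from collections import deque

class AbstentionPolicy:
    """Epsilon-greedy choice between the 'specialized' and 'generic' pipelines,
    keeping a sliding window of recent binary feedback per pipeline."""

    def __init__(self, epsilon=0.1, window=100, seed=0):
        self.epsilon = epsilon
        # deque(maxlen=...) implements the sliding window: old feedback falls off.
        self.feedback = {"specialized": deque(maxlen=window),
                         "generic": deque(maxlen=window)}
        self.rng = random.Random(seed)

    def choose(self) -> str:
        if self.rng.random() < self.epsilon:
            return self.rng.choice(["specialized", "generic"])
        def avg(arm):
            fb = self.feedback[arm]
            return sum(fb) / len(fb) if fb else 0.5  # neutral default, no data
        return max(self.feedback, key=avg)

    def record(self, arm: str, thumbs_up: bool):
        """Thumbs up maps to 1, thumbs down maps to 0."""
        self.feedback[arm].append(1 if thumbs_up else 0)

    def reset(self):
        """Drop all feedback, e.g., when the offline pipeline is retriggered."""
        for fb in self.feedback.values():
            fb.clear()

policy = AbstentionPolicy(epsilon=0.0)  # epsilon=0: purely greedy, for illustration
policy.record("specialized", True)
policy.record("generic", False)
```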
Contextual bandits may do better than a multi-armed bandit at routing questions to the best pipeline if the current workload is composed of a mix of questions that are similar and dissimilar to the past queries. However, contextual bandits require many more feedback points to learn the optimal decision strategy. We were not able to justify such a long learning process through the benchmarks in our evaluation due to the low number of questions (Section 8), though contextual bandits may provide more benefit in other benchmarks or in real-world settings. Instead of a bandit approach, we also considered using a supervised learning approach. However, this requires a different feedback mechanism, which leads to a different user experience. To train a binary classifier that decides whether a question should be processed using the generic or specialized pipeline, we would need to first collect training data by taking each question, passing it through each pipeline to produce two different responses, and asking the user to choose the better one. This type of feedback is more informative than the simple yes/no feedback used in our bandit approach, but it places a greater mental load on users.

# 8 EXPERIMENTAL EVALUATION

We first describe the experimental setup and then present an in-depth experimental study that shows TailorSQL’s performance on three NL2SQL benchmarks. Overall, the evaluation demonstrates that:
• TailorSQL achieves $10{-}22\%$ higher end-to-end SQL generation accuracy compared to other baselines while utilizing $2{-}15\times$ smaller prompts for the same accuracy (Section 8.3).
• Query hint documents alone enhance performance by $4\%$, but TailorSQL achieves greater accuracy and latency improvement through tailored embeddings and context allocation (Section 8.4.1).
• In case of workload changes, TailorSQL’s abstention policy adapts to the workload change to maintain high accuracy (Section 8.5).
• TailorSQL's workload specialization techniques are complementary to the methods in state-of-the-art NL2SQL systems and can improve their performance (Section 8.6).

# 8.1 Experimental Setup

Datasets: We use three NL2SQL benchmarks to test TailorSQL.

• Bird-Union combines tables from the databases in the dev set of the BIRD benchmark [13] into a single database containing 71 tables and 1200 NL2SQL question-SQL pairs. By using one combined database, we simulate real-world scenarios where data is not so cleanly separated into distinct databases for every topic.
• Spider-Union combines tables from the databases in the dev set of the SPIDER benchmark [37] into a single database containing 221 tables and 480 NL2SQL question-SQL pairs.
• FIBEN [25] contains a single database containing 152 tables and 300 NL2SQL question-SQL pairs.

In general, Bird-Union and FIBEN pose a greater challenge than Spider-Union due to having more semantically intricate table names, column names, and questions. Workloads: For each benchmark, we split the question-SQL pairs into two sets: one set to simulate historical query logs (i.e., the "training" set) and the other to simulate future user questions (i.e., the "test" set). We employ two types of splits:

• Random Split: Question-SQL pairs are randomly assigned to either the query log or test set with equal probability. As a result, the test set mirrors the same distribution as the query logs. This case is ideal for showcasing the benefits of TailorSQL.
• Disjoint Split: Question-SQL pairs are divided such that the SQL queries in the query log and test set do not access the same tables. Consequently, the test set exhibits a completely different distribution from the query logs. This case highlights workload drift when user questions have no similar counterparts in the query log.
This is similar to the train/dev/test splits provided by the BIRD and SPIDER benchmarks, where the databases observed in the train set are completely disjoint from those in the dev or test sets. All experiments have half the question-SQL pairs in the query log and the other half in the test set. Unless specified otherwise, we run experiments using Random Split. Baselines: The leading techniques on the BIRD and Spider benchmarks, such as [7, 8, 19, 20, 27, 32], focus on obtaining the best answer using superior LLMs or better decomposition or prompting techniques (e.g., least-to-most prompting). On the other hand, the goal of our evaluation is to assess how incorporating information from past query logs influences NL2SQL performance. The techniques described in this paper are orthogonal and complementary to the techniques on the benchmark leaderboards and can be combined for further improvement in accuracy (see Section 8.6). Therefore, to isolate the performance effects of using query log information and avoid confounding factors such as LLM model and prompting techniques, in our evaluation we use the same LLM (Claude 3 Haiku 1.2 [10]) and perform SQL generation via a single call to the LLM for TailorSQL and for all baselines. We use two baselines:

• SBERT: In this baseline, we use SBERT [22] (specifically, the all-MiniLM-L6-v2 model [4]) to generate embeddings for user questions and for documents. This baseline does not use any workload-related specializations, i.e., it does not store query hint documents, does not generate workload-tailored embeddings, and does not perform an offline analysis for prompt context allocation. For a given user question, this baseline retrieves documents into the LLM prompt in order of decreasing embedding similarity until hitting a specified prompt token limit.
• BM25: In this baseline, we use BM25 [23], a lexical retrieval method, to retrieve documents.
BM25 is often used for retrieval over long documents, where SBERT-based models might not perform as well. All other aspects of this baseline are the same as SBERT. Metrics: We evaluate the accuracy of each NL2SQL approach based on execution accuracy (EX), which measures the fraction of question-SQL pairs for which the execution results of the NL2SQL-generated SQL query and the ground-truth SQL query match. Following the precedent of earlier papers, we measure improvement in accuracy in absolute numbers instead of relative numbers. For example, if execution accuracy increases from 20% to 60%, we refer to a 40% improvement in accuracy instead of a 3× improvement. We also evaluate the latency of each NL2SQL approach, which includes the latency of retrieving relevant documents to create an LLM prompt and the latency of invoking the LLM using the prompt. We repeat each experiment 5 times and report the median value of each metric being measured.

# 8.2 Implementation Details

TailorSQL uses SBERT as its embedding model for generating question embeddings and raw document embeddings. As described in Section 4, TailorSQL's fundamental contributions do not depend on the exact format of documents. Here, we describe the document format we used for our evaluation, but alternative formats may work better for other workloads. 8.2.1 Schema Documents. TailorSQL uses two classes of schema documents: table documents and column documents. Each column document contains information specific to a given column, including the column name, table name, and the ten most commonly-occurring column values. We separated table documents from column documents because we found that including samples for every column in a unified schema document results in very long documents, which quickly use up the limited context space in the LLM's prompt.
Furthermore, if a question only requires a subset of columns from a table, it is unnecessary to retrieve the entire table document. 8.2.2 Query Hint Documents. TailorSQL generates the following types of query hints from past queries:

• Join path hints: for each query, we extract the set of tables which are scanned and the join conditions between the tables.
• Filter hints: for each query, we construct a document for each filter. This includes the names of the tables whose columns are referenced in that filter.
• Group-by hints: we construct one document for the entire group-by condition, including the names of the tables whose columns are referenced in the group-by.

These query hints cover the types of SQL clauses which we most commonly observed in our benchmarks and which appeared most useful to TailorSQL, but they are not exhaustive. We believe that TailorSQL can be easily extended to generate hints for other SQL clauses if needed. If a query hint is found in multiple past queries, we merge the query hint documents and add a counter for the number of times this hint has been observed in the past.

# 8.3 End-to-End Evaluation

The objective of this experiment is to demonstrate that TailorSQL can enhance performance by leveraging past queries. In Fig. 5, the first row shows the SQL execution accuracy of TailorSQL and the baselines across three benchmarks. Additionally, we include a constrained TailorSQL baseline, where we artificially limit the context size for documents to 1K tokens, in order to evaluate how TailorSQL performs in terms of accuracy if the user requires low latency. For the baselines, we vary the limit on the LLM prompt context size. For TailorSQL, only a single point is shown since the context allocator automatically determines the context size for each document class. Accuracy for all techniques varies across benchmarks, reflecting the varying levels of benchmark difficulty.
TailorSQL consistently achieves the highest accuracy across all benchmarks compared to the baselines, while BM25 consistently underperforms. Specifically, TailorSQL achieves 12.5%, 10.9%, and 22.7% higher accuracy than the next best baseline on the three benchmarks, respectively. TailorSQL achieves more significant accuracy improvements on FIBEN than the other benchmarks due to FIBEN's complex queries (e.g., nested queries, multi-table joins) and cryptic table names, which especially benefit from query hints and tailored embeddings.

Figure 5: TailorSQL achieves 12.5%, 10.9% and 22.7% higher execution accuracy compared to the next best baseline (SBERT). TailorSQL achieves the greatest accuracy gains on FIBEN due to the benchmark's complex queries and cryptically-named schema items, which benefit from query hints. Additionally, to achieve the same accuracy, TailorSQL (constrained) uses 2-15× fewer tokens and incurs lower latency than the baselines.

Among the two baselines, the BM25 baseline typically requires more prompt tokens than the SBERT baseline to achieve its best accuracy. This is because BM25 is worse at document retrieval than SBERT and therefore requires more prompt tokens to achieve a sufficiently high document recall to generate accurate SQL queries. Note that for SBERT, increasing the prompt size only improves accuracy up to a certain point, after which further increases in prompt size cause accuracy to drop. This implies that as prompt size increases past the optimal point, the additional documents retrieved into the prompt have an increasing likelihood of being irrelevant, which degrades retrieval precision more than it improves recall. This observation is consistent with findings that adding irrelevant information to LLMs can reduce accuracy [26].
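The budget-bounded retrieval loop used by the baselines (Section 8.1), which underlies this precision/recall trade-off, can be sketched as follows. The `(text, embedding)` document structure and the whitespace token count are illustrative assumptions; a real system would use the LLM's tokenizer:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve_into_budget(question_emb, docs, token_budget):
    """Add documents in decreasing embedding similarity until the prompt
    token budget is exhausted. `docs` is a hypothetical list of
    (text, embedding) pairs."""
    ranked = sorted(docs, key=lambda d: cosine(question_emb, d[1]), reverse=True)
    prompt, used = [], 0
    for text, _ in ranked:
        cost = len(text.split())  # stand-in for a real token count
        if used + cost > token_budget:
            break
        prompt.append(text)
        used += cost
    return prompt
```

As the budget grows, each additional document admitted has a lower similarity score, which is one way to see why accuracy can fall once the budget passes the optimal point.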
The SBERT baseline achieves its best accuracy with 2.3K, 21.5K, and 2.5K prompt tokens for the Spider-Union, Bird-Union, and FIBEN benchmarks, respectively. In contrast, TailorSQL matches or exceeds SBERT's best accuracy using at most 1.3K tokens for the same benchmarks, as shown by the constrained TailorSQL baseline. As a result, TailorSQL achieves the same accuracy as SBERT while using 1.8×, 15×, and 1.9× fewer tokens. The second row of Fig. 5 depicts the latency of invoking the NL2SQL pipeline for a user question, which includes both document retrieval and SQL generation. Latency is primarily influenced by the latency of the large language model (LLM) calls, which depend on factors such as the number of input and output tokens. Generally, latency increases as the number of input prompt tokens grows. Compared to the SBERT baseline, TailorSQL incurs higher latency because it uses more input prompt tokens: TailorSQL includes both schema and hint documents in its prompts, whereas the SBERT baseline only includes schema documents. This effect is mitigated by TailorSQL's tailored embeddings, which enable more effective document retrieval. However, if the user is concerned with latency, they can constrain TailorSQL to use a smaller prompt size, which achieves much lower latency than the SBERT baseline.

# 8.4 Ablation Study

Table 1: TailorSQL Ablation Study on Bird-Union Benchmark

We perform an ablation study for the components of TailorSQL's end-to-end workflow, as well as for the components of its workload-tailored embeddings. 8.4.1 End-to-End Workflow. In this section, we analyze the performance impact of each key technique employed by TailorSQL. Table 1 presents the impact of disabling each of TailorSQL's components, when evaluated on accuracy on Bird-Union. Disabling tailored embeddings results in a noticeable increase in context size and a corresponding rise in latency.
This is because less effective embeddings lead to a larger context being required to gather all the necessary information, resulting in an accuracy drop due to the reduced precision in identifying relevant documents. When the context allocator is disabled, a large fixed context is used to maintain high recall. However, this results in high latency (since LLM invocation latency is correlated with the size of the input prompt) and reduced precision when retrieving relevant documents, negatively impacting accuracy.

Figure 6: TailorSQL's tailored document embeddings achieve higher top-1 and top-5 table document recall compared to baselines. Fine-tuned SBERT achieves better recall than TailorSQL but requires higher training time (shown above each bar).

Figure 7: Improvement in recall achieved by including each individual factor into TailorSQL's tailored embeddings, while ignoring other factors. Recall improves the most when all factors are used (denoted by the bar marked "TailorSQL").

Figure 8: All questions before the vertical dotted line are similar to past queries, while all questions afterwards are dissimilar to past queries. TailorSQL's bandit-based abstention policy is able to select the better pipeline in case of this workload shift, and performs better than purely using TailorSQL's workload-tailored pipeline or a generic SBERT-based pipeline on all questions.

Lastly, disabling query hints causes an increase in context usage and a reduction in accuracy. Without query hints, the system lacks crucial information that assists in SQL generation, forcing the retrieval of more schema documents, which contributes to the higher context size. In summary, each component plays a critical role in maintaining optimal accuracy and latency for TailorSQL.
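The following subsection reports top-K table recall. As a point of reference, that metric (the fraction of questions whose K highest-scoring documents contain every relevant document) can be computed as below; the `(scores, relevant)` per-question structure is an illustrative assumption:

```python
def top_k_recall(questions, k):
    """Fraction of questions whose top-k highest-scoring documents include
    every relevant document. Each question is a (scores, relevant) pair,
    where `scores` maps document id -> similarity and `relevant` is the
    set of ground-truth document ids (hypothetical structure)."""
    hits = 0
    for scores, relevant in questions:
        top_k = sorted(scores, key=scores.get, reverse=True)[:k]
        if relevant <= set(top_k):  # all relevant docs were retrieved
            hits += 1
    return hits / len(questions) if questions else 0.0
```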
8.4.2 Tailored Embeddings. This section evaluates the effectiveness of TailorSQL's tailored embeddings for documents (see Section 5). The first takeaway is that TailorSQL's tailored embeddings perform better than using SBERT and BM25 to generate document embeddings. In Fig. 6, each bar represents the table recall for strategies selecting the top-K relevant tables for a user question. Top-K document recall measures the fraction of user questions where the top-K most similar documents include all relevant documents. TailorSQL consistently outperforms vanilla SBERT embeddings by specializing document embeddings based on the past query workload, leading to significant accuracy improvements (Section 8.3). BM25 performs the worst, consistent with its low accuracy in Fig. 5. The second takeaway is that TailorSQL's tailored embeddings achieve worse recall than the approach of fine-tuning SBERT and using the fine-tuned model to generate embeddings. However, fine-tuning requires up to 2.5 hours of additional training time (as shown in Fig. 6), whereas TailorSQL's tailored embeddings require less than one minute of training on a CPU. Furthermore, fine-tuned models require additional storage space and may not generalize as well as TailorSQL's purposefully underparameterized model. While fine-tuned embedding models are a great approach when maximum recall is desired, we decided to use TailorSQL's method of learned embeddings to minimize training time and storage overhead. We now explore the effect of each factor that contributes to TailorSQL's tailored document embeddings. Fig. 7 illustrates the impact on recall when integrating each of the three factors individually into the tailored embedding, along with raw SBERT document embeddings, while ignoring the two other factors.
Among the three factors, embeddings based on synthetic past questions demonstrate the most substantial improvement across all benchmarks, while the incorporation of co-occurring documents contributes the least improvement.

# 8.5 Robustness against Workload Drift

In this section, we evaluate whether TailorSQL's abstention policy is able to correctly select whether user questions should run on TailorSQL's workload-tailored pipeline or on a generic NL2SQL pipeline. Intuitively, we expect the policy to choose TailorSQL's pipeline if the user question follows similar patterns as the past query workload, and to choose the generic pipeline otherwise. In particular, as the user question characteristics drift over time, we expect the policy to dynamically switch from favoring TailorSQL's pipeline to favoring a generic pipeline. Fig. 8 illustrates the behavior of the abstention policy under workload drift, which occurs over the course of many user questions. The first half of user questions follows the Random Split workload (see Section 8.1), which means that the user questions are similar to the past queries; the second half of user questions follows the Disjoint Split workload, which means that the user questions are dissimilar to the past queries. The plot shows the reward obtained by the abstention policy, which represents the execution match accuracy over a sliding window of 100 questions, as user questions are submitted. A 95% confidence interval is plotted around the abstention policy's reward to indicate variability. For comparison, we include the performance of only using TailorSQL's pipeline for every question and of only using a generic pipeline, equivalent to the SBERT baseline. As expected, a policy of only using TailorSQL's pipeline performs better during the first half of the workload, while the policy of only using the generic SBERT baseline performs better in the second half.
Initially, the abstention policy's bandit algorithm explores both pipelines, quickly converging to TailorSQL's pipeline in the first half. Upon detecting the distribution shift at the midpoint, the accuracy briefly drops, but the bandit algorithm soon adapts, selecting the SBERT pipeline as the optimal choice for the second half. Thus, the bandit-based abstention policy ensures adaptation to workload drift.

# 8.6 Impact on other NL2SQL methods

Table 2: (Spider-Union) TailorSQL improves state-of-the-art NL2SQL systems (DIN-SQL and MAC-SQL) by using past queries.

Table 3: (Bird-Union) TailorSQL improves state-of-the-art NL2SQL systems (DIN-SQL and MAC-SQL) by using past queries.

We now present empirical evidence that leveraging past queries to improve NL2SQL generation is beneficial for existing NL2SQL systems. State-of-the-art NL2SQL methods, such as those on the Spider and BIRD benchmarks, focus on enhancing SQL generation through advanced reasoning techniques like Chain-of-Thought [36] and Self-Consistency [34]. The core innovation of TailorSQL lies in its use of past queries, which is complementary and orthogonal to these techniques, allowing TailorSQL to augment their performance. To demonstrate this, we modified two well-known NL2SQL systems, DIN-SQL [20] and MAC-SQL [32], to incorporate TailorSQL's provided prompt as the initial prompt, and compared this against the use of SBERT-based retrieval for the initial prompt. Tables 2 and 3 show that using TailorSQL as the initial prompt significantly improves accuracy and reduces SQL generation latency compared to SBERT-based retrieval across both benchmarks and NL2SQL systems.
Note that the accuracy of DIN-SQL and MAC-SQL in our results differs from the values reported on the public Spider and BIRD leaderboards due to modifications in our setup: we combined each benchmark's databases into one Union database, excluded evidence from the BIRD benchmark, and employed a different LLM than the one originally used to tune the DIN-SQL and MAC-SQL prompts.

# 9 RELATED WORK

LLMs for NL2SQL: Today, the leaderboards for NL2SQL benchmarks like Spider and BIRD are dominated by LLM-based solutions [8, 9, 19, 27], while earlier methods leading the benchmarks were mostly based on manually-tweaked encoder-decoder LSTM-based architectures, e.g., [24, 33]. Top-performing NL2SQL systems primarily focus on question representation and information organization. DIN-SQL [20] uses decompositions and intermediate query representations following chain-of-thought [36] and least-to-most prompting [39] paradigms. DAIL-SQL [7] evaluates different methods of question representation, code representation, and information (metadata) organization, and picks the best combination of the three. Contrary to our approach that focuses on optimizing in-domain performance, DAIL-SQL focuses on cross-domain in-context learning and specifically masks domain-specific keywords. CodeS [12] is a pretrained LLM designed specifically for NL2SQL. These methods are orthogonal to our idea of incorporating past query history, and combining these methods into TailorSQL can further improve performance. SNAILS [15] shows that NL2SQL techniques generally perform worse on databases that use less natural schema names, which further motivates the need for NL2SQL techniques like TailorSQL that can understand obscurely-named tables and columns. Retrieval Methods: Retrieval-augmented generation (RAG) [11] has been employed to boost LLM accuracy across various NLP domains.
Bi-encoding retrieval methods encode the question and document separately using the same embedding transformation, and compute their similarity via a distance metric such as cosine similarity. Recent literature shows that variants of the BERT model for embedding [6, 16] exhibit the best retrieval accuracy. However, BERT models are expensive to fine-tune due to their large capacity and a notorious scarcity of training data across many domains. Therefore, a pragmatic approach is to rely on bi-encoded similarity using pretrained models. Cross-encoding [5, 17] feeds the concatenated question-document pair into BERT and trains an FCN classifier layer on its vector representation. It commonly outperforms bi-encoded similarity scoring since it is able to capture more complex cross-feature interactions between the question and document. However, this approach is often prohibitively expensive at inference time, as it requires full encoding of each question-document pair at retrieval time. Fine-tuning LLMs: Fine-tuning involves adjusting a pretrained model on a specific, often narrower, dataset or task to enhance its performance in that particular domain. Fine-tuning techniques are commonly classified into supervised [18, 28, 35], unsupervised [38], and reinforcement learning [21, 29] based methods. In the case of NL2SQL pipelines, the past query workload along with synthetically generated user questions could be used as input-output pairs for supervised fine-tuning of the model. This method could potentially deliver better results than the RAG-style solution. However, there are several practical downsides to fine-tuning. First, users may have privacy concerns about using their data to train LLMs shared across users. Second, training and maintaining a separate fine-tuned LLM per database is an expensive operation, especially given the dynamic nature of databases where the data and query workload frequently change.
NL2SQL (natural language to SQL) translates natural language questions into SQL queries, thereby making structured data accessible to non-technical users and serving as the foundation for intelligent data applications. State-of-the-art NL2SQL techniques typically perform translation by retrieving database-specific information, such as the database schema, and invoking a pre-trained large language model (LLM) with the question and retrieved information to generate the SQL query. However, existing NL2SQL techniques miss a key opportunity which is present in real-world settings: NL2SQL is typically applied to existing databases which have already served many SQL queries in the past. The past query workload implicitly contains information which is helpful for accurate NL2SQL translation and is not apparent from the database schema alone, such as common join paths and the semantics of obscurely-named tables and columns. We introduce TailorSQL, an NL2SQL system that takes advantage of information in the past query workload to improve both the accuracy and latency of translating natural language questions into SQL. By specializing to a given workload, TailorSQL achieves up to 2× improvement in execution accuracy on standardized benchmarks.
# 1. Introduction

When browsing through our camera albums, we often find ourselves wishing to view the videos we've captured from different camera poses: for instance, seeing footage originally shot from the side as if it were filmed from the front, or transforming a moving shot into one that appears as if taken from a stationary camera. What if we could freely manipulate the camera movement within recorded videos to re-synthesize them from any viewpoint? This ability would not only revolutionize how we experience our own videos but also impact fields like video editing, 4D content creation [5, 29, 64, 84], virtual reality [17, 82], and robotics [10, 39].

Figure 2. Motivation. To edit camera trajectories in monocular videos, we embed knowledge from video geometry prediction models, e.g., MonST3R [107], into video generative models [4, 23], allowing the model to synthesize realistic novel views by filling occluded regions the geometry model cannot infer. By incorporating geometrical cues for generation, our approach demonstrates superior performance on novel view video synthesis, compared to fully generative approaches, e.g., Generative Camera Dolly [87].

In this work, we focus on the task of re-synthesizing a given video along a user-defined camera trajectory, allowing for arbitrary modifications to the camera's movement and perspective throughout the video, a process we refer to as video camera trajectory editing. This task is inherently related to the extreme case of dynamic novel view synthesis (NVS) given a monocular video [49, 90], as it involves generating views from significantly altered or entirely new camera trajectories that were not present in the original footage. Existing approaches encounter two main challenges when tackling this task: Reconstruction-based methods struggle with unseen areas.
The extensive modification to the camera's path makes the problem highly ill-posed, causing existing reconstruction-based methods for dynamic NVS [15, 95, 102, 107, 110] to fail in synthesizing visually realistic novel views, as illustrated in Fig. 2-(b). Because these methods focus on accurately reconstructing observed regions rather than handling unseen areas, they cannot accommodate the significant extrapolation required when the new camera trajectory deviates greatly from the original. Generation methods require large-scale 4D data. While generative models [21, 28, 54, 72, 73, 76] have shown promising results in synthesizing highly realistic novel views in static scenes by training on large-scale multi-view image datasets (3D data) [14, 70, 111], applying this approach to dynamic scenes is challenging due to the limited availability of extensive real-world multi-view videos (4D data). Recent work, e.g., Generative Camera Dolly [87], tackles the problem by training on synthetic multi-view video data. However, such methods often fail to generalize to real-world videos due to domain gaps, as shown in Fig. 2-(c). In this work, to overcome these challenges, we explore Vid-CamEdit, a more practical and data-efficient approach to leveraging generative models that sidesteps the need for extensive real 4D training data. Instead of taking a purely data-driven solution, we decompose the task into two sub-tasks: (1) temporally-consistent geometry estimation and (2) generative video rendering based on the estimated geometry. Specifically, we ground pre-trained video generative models with geometry estimated from off-the-shelf geometry estimation models [37, 98, 107] (Fig. 2-(b)), allowing the model to synthesize realistic novel view videos while relying on the geometry as a scaffold.
This geometric prior reduces the burden on the generative model, enabling it to focus primarily on enhancing uncertain regions instead of learning full 4D dynamics from scratch, thereby greatly reducing the need for large-scale 4D training data. To further reduce the need for 4D data, we incorporate a factorized fine-tuning strategy. By considering the spatiotemporal blocks of our video generative model independently, we train the spatial block with multi-view image (3D) data and train the temporal block with video data. As both 3D and video data are accessible at scale, the training of generative models no longer requires 4D data. Our main contributions are as follows:

• We propose Vid-CamEdit, a geometrically-grounded video-to-video translation framework to re-synthesize a user-provided video along a desired camera trajectory.
• By effectively grounding the estimated geometry in the video generative model and adopting a factorized fine-tuning strategy, our framework eliminates the need for large-scale 4D data for high-quality generation.
• Extensive experiments on Neu3D [47], ST-NeRF [106], and in-the-wild videos (e.g., uncurated videos from online) validate that our framework achieves superior performance over existing methods [87, 107, 110].

# 2. Related Work

Dynamic novel view synthesis via reconstruction. Similar to traditional novel view synthesis, which aims to reconstruct the scene given multi-view observations, dynamic novel view synthesis extends its application to dynamic scenes. Building upon the success of Neural Radiance Fields (NeRF) [58] and 3D Gaussian Splatting [44] for novel view synthesis, existing approaches [7, 15, 19, 66, 88, 90, 95, 101, 102, 110] tackle dynamic scenes by introducing an additional time dimension [7, 19, 66] or learning time-based deformations [15, 95, 101, 102].
Although these approaches can effectively handle dynamics, due to their lack of ability to extrapolate or estimate unseen regions, novel views can only be obtained from viewpoints near the original input. These limitations hinder their application to in-the-wild videos, leading to large holes in unseen regions and accumulating reprojection errors. Video geometry estimation. Unlike monocular depth estimation (MDE) [42, 67, 68, 99, 100], which infers depth from a single image, video depth estimation must ensure temporal consistency across frames. Early approaches [56, 108] achieved this by fine-tuning MDE models and modeling motions for each input video, refining depth predictions frame by frame. More recent methods [37, 77, 98] leverage generative priors to enhance depth quality and stability. In parallel, MonST3R [107] extends DUSt3R's [91] pointmap representation, which capitalizes on accurate correspondence between images [11, 12, 30–33] to enable dense 3D scene reconstruction, to dynamic scenes by isolating dynamic and static regions. Novel view generation in static scenes. The advent of large-scale 3D object datasets such as Objaverse [14] has led to significant progress in novel view generative models for 3D objects [21, 54, 55, 79, 89]. On the other hand, recent works [60, 73, 76] have introduced scene-level novel view generative models based on multi-view scene datasets [13, 52, 53, 70, 111]. Compared to conventional reconstruction-based static NVS methods [34, 35, 44, 58, 85], they demonstrate superior extrapolation and interpolation abilities, particularly when input views are sparse. However, because these models are trained on multi-view image pairs as direct supervision, applying these methods directly to dynamic novel view synthesis requires multi-view video pairs, which are often difficult to obtain. Camera controllable video generation.
Building upon the recent success of video diffusion models [4, 23, 103], recent works [1, 26, 93, 97] have achieved camera-controllable video generation by introducing additional adapters into U-Net-based video diffusion models that accept camera trajectories. More recently, CVD [45] generates multi-view videos by equipping such camera-controllable video diffusion models with an attention-based synchronization module. However, all of these works only enable camera-conditioned video generation, whereas our goal is to tackle camera trajectory editing of the input video. Concurrently, SynCamMaster [2] demonstrates more robust multi-view video generation using Unreal Engine-based synthetic data. However, it only generates stationary videos and has not been fully validated on user-provided video conditional generation. Generative dynamic novel view synthesis. Extending such camera-controllable video models to video camera trajectory editing is non-trivial, as it requires both semantic understanding and low-level perception of the user-provided video. Generative Camera Dolly [87] is the first attempt at this, paving the way for future research; however, it still shows clear weaknesses in generalizing to in-the-wild videos, being highly fitted to its 4D synthetic training data. 4DiM [94] generates novel view videos conditioned on one or more input images. However, it relies on 4D data from Google Street View, and its generalizability to in-the-wild videos has not yet been validated. Recent concurrent efforts, such as ReCapture [105] and CAT4D [96], share goals and motivations akin to ours. For instance, ReCapture enables this by leveraging LoRA for test-time training, whereas our approach generalizes without requiring any additional training at test time.

# 3. Methodology

# 3.1.
Problem definition Given a monocular video as input, which can be captured from either a stationary or a moving camera, our objective in video camera trajectory editing is to design a framework that can synthesize a new video from any desired camera trajectory. We first define the input video with $T$ frames of size $H \times W$ as $X \in \mathbb{R}^{T \times H \times W \times 3}$ and its camera trajectory as $C_X \in \mathbb{R}^{T \times 3 \times 4}$, which consists of a series of camera extrinsic matrices. The desired camera trajectory for the novel video $Y \in \mathbb{R}^{T \times H \times W \times 3}$ is defined as $C_Y \in \mathbb{R}^{T \times 3 \times 4}$, where $C_Y$ is obtained by applying per-frame relative camera transformations $C_{\mathrm{rel}} \in \mathbb{R}^{T \times 3 \times 4}$ to $C_X$. Altogether, our framework $\mathcal{F}(\cdot)$ synthesizes a new video $Y$ conditioned on the input video $X$ and relative camera transformations $C_{\mathrm{rel}}$ as follows: $$ Y = \mathcal{F}(X, C_{\mathrm{rel}}, K), $$ where we assume both the original and synthesized videos share the same camera intrinsics $K$. To design this framework, the following conditions must be met: (1) The framework $\mathcal{F}(\cdot)$ should accept free-form camera trajectories, without being restricted to preset ones. (2) The synthesized video $Y$ along the new camera trajectory should preserve the geometric structure of the original video $X$. (3) The synthesized video $Y$ should appear visually realistic, with proper interpolation and extrapolation of regions that are not observed in the original video (e.g., occluded areas). # 3.2.
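As a concrete illustration of how $C_Y$ arises from $C_X$ and $C_{\mathrm{rel}}$, the per-frame composition can be sketched as below. The left-multiplication convention in homogeneous coordinates is our assumption, since the text only states that relative transformations are applied per frame; the function names are illustrative.

```python
import numpy as np

def to_hom(E):
    """Lift a 3x4 extrinsic [R|t] to a 4x4 homogeneous matrix."""
    H = np.eye(4)
    H[:3, :] = E
    return H

def apply_relative_trajectory(C_X, C_rel):
    """C_Y(t) obtained by applying C_rel(t) to C_X(t), per frame.

    C_X, C_rel: (T, 3, 4) stacks of extrinsics. We assume the relative
    transform acts by left-multiplication in homogeneous coordinates.
    """
    C_Y = np.empty_like(C_X)
    for t in range(C_X.shape[0]):
        C_Y[t] = (to_hom(C_rel[t]) @ to_hom(C_X[t]))[:3, :]
    return C_Y
```

With identity relative transforms, the new trajectory reduces to the original one, which is a quick sanity check on the convention.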
Overview and motivation To handle the extensive extrapolation inherently required by our task, we design $\mathcal{F}(\cdot)$ as a generative framework, as generative models have shown promising results for large extrapolation in static scenes [54, 73]. However, leveraging generative models for dynamic NVS raises a unique challenge: the lack of sufficient real 4D data (multi-view videos). To address this challenge, we explore a practical and data-efficient solution for the framework $\mathcal{F}(\cdot)$: a hybrid strategy that grounds strong geometry priors in video generative models. Our key intuition is to reduce the burden on the generative model by simplifying its task. Instead of relying solely on the generative model, we decompose the 4D problem into 3D spatial geometry and 1D temporal dynamics. For the 3D spatial geometry, we utilize a temporally consistent geometry estimation model to capture the 3D structure. As illustrated in Fig. 2-(b), this provides geometric cues to the video generation model, which can use the geometry as a scaffold for realistic generation. To handle the 1D temporal dynamics, we leverage the temporal consistency capabilities inherent in video generative models. By exploiting these capabilities, we ensure that the generated frames are temporally coherent, preserving motion consistency over time. The overview of our pipeline is illustrated in Fig. 3. # 3.3. Generative rendering from estimated geometry Temporally-consistent geometry estimation. To effectively reduce the burden on the video generative model of our framework $\mathcal{F}(\cdot)$, the geometry prediction model $g$ serves as a general model capable of estimating temporally consistent geometry for a given video.
Although various models can be leveraged as $g$, we build our framework on the recently proposed MonST3R [107], as its joint estimation of a consistent camera trajectory and pointmaps can be effectively utilized in our framework. Specifically, the geometry of the input video is represented as a series of pointmaps $G \in \mathbb{R}^{T \times H \times W \times 3}$, which are coordinate maps indicating the 3D location of each pixel within the global 3D space. For each frame $t$, the pointmap $G_t$ provides a dense mapping from 2D pixel coordinates $(u, v)$ to their corresponding 3D world coordinates. Geometry-grounded video-to-video translation. With the estimated temporally-consistent geometry, we now reformulate the video generative model in our framework as a geometry-guided video-to-video translation problem. We incorporate the predicted geometry $G$ as a crucial cue alongside the desired camera trajectory. At a high level, this framework can be expressed as: Figure 3. Overview of our framework. Given a video and a target camera trajectory, we first extract video feature tokens using a Video Encoder and obtain the dynamic scene's temporally consistent geometry through Temporal Geometry Estimation. We then ground the video generative model on this estimated geometry by re-aligning the video feature tokens according to the 2D flow between the source and target camera trajectories. $$ \mathcal{F}(X, C_{\mathrm{rel}}, K) := \mathrm{Sample}\big( p_\theta(Y \mid X, C_{\mathrm{rel}}, K, G) \big), $$ where $p_\theta$ is the learned distribution of a diffusion model $\theta$ and $\mathrm{Sample}(\cdot)$ is a sampling function for the diffusion reverse process [28, 81]. Although providing the 3D geometry information can facilitate novel view video generation, the model would still need to learn a mechanism for NVS that properly reflects the input 3D geometry and camera parameters.
Assuming that the pre-trained video generative model lacks the ability to explicitly understand 3D representations, we further simplify the task for the video model. Given the pointmap $G_t$ for frame $t$, we can obtain 2D flow fields $f_{\mathrm{rel}} \in \mathbb{R}^{T \times H \times W \times 2}$ by projecting these 3D points onto the target viewpoint. Specifically, for each pixel $(u, v)$ in the source frame, we obtain the 2D flow $f_{\mathrm{rel}}$: $$ f_{\mathrm{rel}}(u, v, t) = \Pi\big( C_{\mathrm{rel}}(t) \cdot G(u, v, t), K \big) - (u, v), $$ where $\Pi(\cdot)$ is the perspective projection function. This process maps each source pixel to its corresponding location in the target view at time $t$, effectively grounding the translation in geometry without requiring the model to handle and understand complex 3D structures directly. We thus reformulate the generative process as: $$ \mathcal{F}(X, C_{\mathrm{rel}}, K) := \mathrm{Sample}\big( p_\theta(Y \mid X, f_{\mathrm{rel}}) \big), $$ where we note that the previous conditions – camera poses $C_{\mathrm{rel}}$, intrinsics $K$, and 3D geometry $G$ – are all inherently embedded within the 2D flow maps $f_{\mathrm{rel}}$. This reformulation simplifies the task for the generative model while maintaining geometric consistency through the explicitly computed correspondences. Re-aligning input video tokens. For the reformulated video generative model that takes the input video and 2D flow maps as conditions, we incorporate both conditions into a pre-trained video diffusion model [23]. For video conditioning, the model must preserve the input video's details, such as color and texture. We adopt the architecture of ReferenceNet [36], which has been shown to effectively preserve the low-level semantics of input images [36, 57].
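The flow computation of Eq. (3) can be sketched as follows. We assume here that the pointmap is expressed in the source camera's coordinate frame, so that $C_{\mathrm{rel}}$ alone carries the points into the target view; the function and variable names are illustrative, not the paper's implementation.

```python
import numpy as np

def flow_from_pointmap(G_t, C_rel_t, K):
    """2D flow induced by a relative camera move, following Eq. (3).

    G_t:     (H, W, 3) pointmap for frame t, assumed to live in the
             source camera's coordinate frame (an assumption).
    C_rel_t: (3, 4) relative extrinsic [R|t].
    K:       (3, 3) intrinsics.
    Returns f_rel of shape (H, W, 2).
    """
    H, W, _ = G_t.shape
    # Homogenize the points and move them into the target camera frame.
    pts = G_t.reshape(-1, 3)
    pts_h = np.concatenate([pts, np.ones((pts.shape[0], 1))], axis=1)
    pts_tgt = (C_rel_t @ pts_h.T).T                 # (H*W, 3)
    # Perspective projection Π(·, K): divide by depth after applying K.
    proj = (K @ pts_tgt.T).T
    uv_tgt = proj[:, :2] / proj[:, 2:3]
    # Source pixel grid (u, v), subtracted to obtain the flow.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    uv_src = np.stack([u, v], axis=-1).reshape(-1, 2)
    return (uv_tgt - uv_src).reshape(H, W, 2)
```

Under an identity relative transform, every pixel projects back onto itself and the flow vanishes, which makes the convention easy to verify.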
Specifically, our approach is based on a U-Net-based video diffusion model in which spatial and temporal blocks are interleaved. On top of this, we define a video encoder $\mathcal{E}_\phi$ that shares the same architecture as the video diffusion model. The feature tokens of $\mathcal{E}_\phi$ are then concatenated into the self-attention of each spatial block in the diffusion model, which handles spatial interaction within each frame of the novel view video being generated. For flow conditioning, we align the feature tokens of $\mathcal{E}_\phi$ with the flow condition $f_{\mathrm{rel}}$. To this end, we can either explicitly warp the feature tokens of the input video [60, 61] or encourage the model to perform reliable internal realignment with flow-conditioning methods [6, 76, 109]. In this work, motivated by GenWarp [76], we rearrange the positional embeddings of the input video according to the flow map $f_{\mathrm{rel}}$ and employ them as additional positional embeddings, thereby allowing the model to naturally learn the flow condition. Specifically, for a given position $(u, v)$ in frame $t$, the re-aligned positional encoding $\mathrm{PE}'$ is computed as: $\mathrm{PE}'(u, v, t) = \mathrm{PE}\big( u + f_{\mathrm{rel}}(u, v, t)_x, v + f_{\mathrm{rel}}(u, v, t)_y, t \big),$ (5) where $\mathrm{PE}$ denotes the sinusoidal positional encoding for the input video, and $f_{\mathrm{rel}}(u, v, t)_x$, $f_{\mathrm{rel}}(u, v, t)_y$ represent the flow vectors in the $x$ and $y$ directions, respectively. The re-aligned positional embeddings are additionally incorporated into the video diffusion model alongside the original positional embeddings, enabling the video generative model to take the flow condition together with the input video.
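A minimal sketch of the re-aligned positional encoding of Eq. (5) for a single frame. The concrete sinusoidal encoding (number of frequencies, channel layout) is an assumption, since the text does not specify it; the key point is that a sinusoid is defined on continuous arguments, so it can be evaluated directly at the flow-shifted positions.

```python
import numpy as np

def sincos_pe(coords, num_freqs=4):
    """Sinusoidal encoding of continuous (u, v) coordinates.

    coords: (..., 2) array; returns (..., 4 * num_freqs). Frequency
    schedule and channel layout are illustrative assumptions.
    """
    freqs = 2.0 ** np.arange(num_freqs)                  # (F,)
    ang = coords[..., None] * freqs                      # (..., 2, F)
    enc = np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)
    return enc.reshape(*coords.shape[:-1], -1)

def realigned_pe(H, W, flow, num_freqs=4):
    """PE'(u, v) = PE(u + f_x, v + f_y) for one frame, cf. Eq. (5).

    flow: (H, W, 2) flow map f_rel for this frame. We simply evaluate
    the sinusoid at the flow-shifted pixel positions.
    """
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    base = np.stack([u, v], axis=-1).astype(float)
    return sincos_pe(base + flow, num_freqs)
```

With zero flow, the re-aligned encoding coincides with the ordinary grid encoding; a constant one-pixel flow shifts the encoding by exactly one grid column.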
Note that our re-alignment process is fully differentiable, allowing the overall framework to be trained end-to-end. Further details are provided in Appendix B. Figure 4. Factorized fine-tuning strategy without 4D data. # 3.4. Fine-tuning without 4D data While our geometry-grounded strategy effectively alleviates the computational burden on generative models, naïvely training such models still hinges on real-world 4D data (i.e., multi-view videos), which is prohibitively expensive and impractical to acquire at large scale. Furthermore, training generative models exclusively on synthetic 4D data suffers from a substantial domain gap [87], rendering it suboptimal. We instead adopt a factorized training protocol that capitalizes on more readily available datasets: multi-view images (3D) and conventional video data, similarly to [78, 94]. This shift obviates the need for comprehensive 4D data collection, offering a more scalable solution. Architecture. We employ a video generative model backbone [4, 23] composed of interleaved spatial and temporal interaction blocks. We inject conditioning derived from input video tokens solely into the spatial interaction blocks – thereby converting them into multi-view blocks (see Fig. 4) – to concentrate them on 3D synthesis. Meanwhile, the temporal interaction blocks remain dedicated to learning temporal priors. For detailed diagrams, please refer to Appendix B. Block-wise supervision. Given our architectural design, we employ an intuitive factorized training strategy. When training on videos, we freeze the multi-view blocks; when training on multi-view images, we freeze the temporal blocks. Multi-view images are treated as multi-view videos with $T = 1$, updating only the multi-view blocks, whereas video data are treated as multi-view videos with the same input and output cameras, updating only the temporal blocks.
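The alternating freeze schedule, together with CFG-style dropout of the conditioning tokens, can be summarized in a few lines. The block names, batch-type strings, and the dropout probability below are illustrative placeholders, not the paper's actual identifiers or values.

```python
import random

def prepare_step(batch_type, cond_tokens, null_token, p_uncond=0.1, rng=random):
    """One step of the factorized, block-wise supervision protocol.

    batch_type is "multiview_images" (3D data, treated as T=1 videos)
    or "video" (same input/output cameras). Returns the set of block
    groups to update and the (possibly nulled) conditioning, mirroring
    CFG-style dropout of the video-encoder tokens.
    """
    if batch_type == "multiview_images":
        trainable = {"multiview"}   # temporal blocks stay frozen
    else:
        trainable = {"temporal"}    # multi-view blocks stay frozen
    cond = null_token if rng.random() < p_uncond else cond_tokens
    return trainable, cond
```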
Here, the conditioning tokens from the video encoder are replaced with a null condition at a predefined probability, similarly to CFG [27]. By alternately freezing these blocks, we mitigate overfitting to either modality and successfully train our model without relying on 4D data. # 4. Experiments # 4.1. Implementation details Our framework consists of two key components: a geometry prediction model and a video generative model. As mentioned in Section 3.3, we leverage MonST3R [107] as our geometry prediction model. For the video generative model, our framework can leverage any spatio-temporally factorized video diffusion model [4, 9, 23]. Among them, we adopt AnimateDiff [23] based on Stable Diffusion 1.5 [72] as our base model, generating $T = 12$ frames at once, as it best fits our computational constraints. To condition the diffusion model with only the input video and cameras (2D flow), we replace the original text condition with CLIP [65] image features. Please refer to Appendix B for additional details. Code and weights will be publicly available. Figure 5. Qualitative results. Given a user-provided monocular video, our method can synthesize high-quality videos along desired camera trajectories. The frames from the original videos are depicted in the yellow box at the top right of each image. Training dataset. For multi-view image data, we utilize RealEstate10K [111], Mannequin-Challenge [48], MegaScene [86], and ScanNet [13]. For temporal fine-tuning, we initialize the temporal modules from the pretrained checkpoint [23] trained on WebVid-10M [3] and additionally use the TikTok dataset [40]. # 4.2. Baselines As our task demands extensive interpolation and extrapolation, we primarily compare our method with generative and generalizable methods: Generative Camera Dolly (GCD) [87] and Pseudo-DVS [110]. We also report performance improvements over our baseline: reprojection using MonST3R [107].
We evaluate two variants with MonST3R: all-frame reprojection and per-frame reprojection. When synthesizing a novel view from frame $t$, all-frame reprojection leverages all pointmaps and dynamic masks from MonST3R's global alignment, projecting all static points across frames $[1, T]$ and combining them with dynamic points from frame $t$. Furthermore, to validate our method's applicability in per-scene 4D reconstruction scenarios, we integrate it into an existing optimization-based framework and compare its performance against Shape of Motion [90], DynIBaR [49], and HyperNeRF [62]. # 4.3. Results Qualitative comparisons. Fig. 5 and Fig. 6 show qualitative results and comparisons on in-the-wild videos with the baseline methods [87, 107, 110]. MonST3R [107] and Pseudo-DVS [110] show reasonable performance in some regions but fail to synthesize occluded regions. GCD [87] generates synthetic artifacts when dealing with in-the-wild video, failing to generalize. Additionally, we present results of a reprojection-and-inpainting [112] baseline, which struggles to refine ill-warped artifacts when conditioned on noisy reprojections. In contrast, our method generates plausible videos from new camera trajectories. We also provide more qualitative results in Appendix A. Quantitative comparisons. We perform a quantitative comparison of our method against generalizable reconstruction and generation methods on the multi-view video datasets Neu3D [47] and ST-NeRF [106] in Tab. 1. For frame consistency, we measure the CLIP score between each frame of the input videos and the generated videos, following [41]. The results show that our method achieves superior performance across all the datasets. Additionally, we report a user study on human preference in Fig. 7, and VBench [38] scores, a VLM-based automated benchmark, in Fig. 8. Further details are described in Appendix D. Figure 6.
Qualitative comparisons with MonST3R [107], video inpainting [112] w/ MonST3R, GCD (Generative Camera Dolly) [87], and Pseudo-DVS [110]. Ours is best at synthesizing visually realistic images while maintaining the original geometry. Table 1. Quantitative results with generalizable/generation baselines. We show quantitative comparisons on the multi-view dynamic datasets Neu3D [47] and ST-NeRF [106]. Ours is best in both LPIPS and Frame-Consistency (Frame-Con.), i.e., the CLIP score between each frame of the input video and the generated video. Note that datasets containing videos from a stationary camera are used, as our primary baseline, Camera Dolly, does not officially accept input videos from moving cameras, e.g., DyCheck [20]. # 4.4. Analyses Application for per-scene 4D reconstruction. While our primary goal is to directly generate video renderings, our approach can be seamlessly integrated into per-scene 4D reconstruction methods that produce 4D representations as output, by leveraging our generated results as additional supervision. As shown in Tab. 2, quantitative evaluations on the DyCheck dataset [20] demonstrate that incorporating our method into existing per-scene reconstruction pipelines yields higher reconstruction quality. Specifically, following the Iterative Dataset Update proposed in [24], we impose our iteratively generated results as additional training signals for Shape-of-Motion [90]. Qualitative results and details can be found in Appendix A. Performance on varying trajectory difficulties. We analyze how the performance of our method and the baseline methods changes as the desired camera trajectory becomes more challenging. Specifically, following the evaluation protocol in GeoGPT [71], we measure the LPIPS between the input video and the target GT video as the generation difficulty due to viewpoint change, and consider the LPIPS between the generated video and the target GT video as the degree of distortion.
We then compare the degree of distortion against the generation difficulty. As shown in Fig. 9, our method demonstrates superior performance compared to the baseline methods as the difficulty increases. Table 2. Quantitative results of 4D per-scene reconstruction on DyCheck [20]. Employing our method within the existing per-scene reconstruction method yields better reconstruction quality. ∗ indicates reproduced results for evaluation in occluded regions. Figure 7. User study. The user study is conducted by surveying 59 participants to evaluate (a) consistency with input videos, (b) video realness, and (c) faithfulness to camera trajectories. Ablation on design choices. We provide an ablation study on various design choices in our framework. To validate the effectiveness of the geometry grounding introduced in Sec. 3.3, we test a case where we directly inject camera poses (Plücker coordinates) [80] into the model in place of this grounding. Additionally, we compare the performance against a baseline method, reprojection and video inpainting [112], which is one possible naïve approach to combining geometry estimation models with generation. As shown in Tab. 3, our full framework is most effective in both cases. Ablation on video geometry estimation models. We also provide an ablation study of using various video geometry estimation models in our framework. Specifically, we evaluate MonST3R [107], DepthAnyVideo [98], DepthAnything2 [100], and DepthCrafter [37], measuring LPIPS and SSIM on the Neu3D dataset [47]. As shown in Tab. 4, the results show minimal performance differences across models, indicating that our framework achieves consistent quality regardless of the chosen geometry prediction model.
We introduce Vid-CamEdit, a novel framework for video camera trajectory editing, enabling the re-synthesis of monocular videos along user-defined camera paths. This task is challenging due to its ill-posed nature and the limited multi-view video data for training. Traditional reconstruction methods struggle with extreme trajectory changes, and existing generative models for dynamic novel view synthesis cannot handle in-the-wild videos. Our approach consists of two steps: estimating temporally consistent geometry, and generative rendering guided by this geometry. By integrating geometric priors, the generative model focuses on synthesizing realistic details where the estimated geometry is uncertain. We eliminate the need for extensive 4D training data through a factorized fine-tuning framework that separately trains spatial and temporal components using multi-view image and video data. Our method outperforms baselines in producing plausible videos from novel camera trajectories, especially in extreme extrapolation scenarios on real-world footage.
# 1 Introduction Graph neural networks (GNNs), and specifically message-passing neural networks [16, 17], have become a dominant approach for representation learning on graph-structured data [31, 37]. Since the expressiveness of standard GNNs within the message-passing framework is limited by their inability to distinguish pairs of graphs that pass the one-dimensional Weisfeiler Leman test (WL) [26, 32], a great variety of more expressive architectures has been proposed, such as higher-order networks [22, 25, 26], subgraph-GNNs [10, 33, 35] and GNNs augmented with homomorphism counts [3, 19]. Concurrent with the development of new architectures is the pursuit of fundamental questions on the expressiveness of GNNs: which functions can be represented by GNNs, and which nodes or graphs can be distinguished. We focus on the latter, which we call separating power. Morris et al. [26] and Barcelo et al. [4], building on earlier results by Cai et al. [7], gave logical characterizations of the separating power of GNN architectures. The node separating power of GNNs matches that of Graded Modal Logic formulas. Higher-order $k$-GNNs can be characterized in terms of $(k-1)$-WL [26] and have the separating power of First-Order Logic with $k$ variables and counting quantifiers, yielding a hierarchy of increasingly expressive models that can separate nodes in graphs of size $n$ up to isomorphism when $k \geq n$. In this work, we introduce Hierarchical Ego GNNs (HE-GNNs), inspired by the Individualization and Refinement paradigm used by graph isomorphism solvers, where node individualization is alternated with simple message-passing. We study the separating power of HE-GNNs in detail through a logical lens, and make explicit connections between subgraph GNNs, homomorphism-count-enriched GNNs, individualization and refinement, and higher-order GNNs. Our main contributions are: • We characterize the separating power of HE-GNNs with and without restrictions on subgraph radius.
Specifically, we provide logical characterizations in terms of graded hybrid logic, situate the separating power of HE-GNNs within the WL hierarchy, and show that HE-GNNs constitute a strict hierarchy in terms of their nesting depth $d$. HE-GNNs are able to separate nodes in graphs of size $n$ up to isomorphism when $d \geq n$, like higher-order GNNs, but at lower cost in terms of space complexity. • We prove that the graph separating power of HE-GNNs with depth $d$ is lower bounded by that of IR with $d$ rounds of individualization, and show that common subgraph-GNN architectures are special cases of depth-1 HE-GNNs. We further identify a class of graphs with “small egorank” from which low-depth HE-GNNs can compute homomorphism counts. These results generalize and shed new light on known relations between subgraph-GNNs, higher-order GNNs and homomorphism count vectors from tree-like graphs. • We confirm empirically that HE-GNNs up to depth 2 improve performance on ZINC-12k and can distinguish strongly regular graphs indistinguishable by 3-GNNs and common subgraph architectures. We believe our results add foundational insights for understanding GNNs from a logical perspective, relating the separating power of existing architectures and guiding the development of models with improved expressiveness. Acknowledgements We thank Martin Grohe, Ronald de Haan, and Carsten Lutz for valuable discussions that contributed to the development of this work. # 2 Background and notation For a set $X$, we denote by $\mathcal{M}(X)$ the collection of all finite multisets of elements of $X$. We write $\oplus$ for vector concatenation, and $\delta_{xy}$ for Kronecker's delta (i.e., $\delta_{xy}$ is 1 if $x = y$ and 0 otherwise). Graphs Fix a set $P$ of binary node features.
By a graph we will mean a triple $G = (V, E, \mathrm{lab})$ where $V$ is a set of nodes, $E \subseteq V \times V$ is a symmetric and irreflexive edge relation, and $\mathrm{lab} : V \to 2^P$. The degree of a node $v$ in a graph $G = (V, E, \mathrm{lab})$ is $|\{ u \mid (v, u) \in E \}|$ and the degree of a graph is the maximum degree of its nodes. A pointed graph is a pair $(G, v)$ where $G$ is a graph and $v$ is a node of $G$. An isomorphism between graphs $G = (V, E, \mathrm{lab})$ and $G' = (V', E', \mathrm{lab}')$ is a bijection $f : V \to V'$ such that $E' = \{ (f(v), f(u)) \mid (v, u) \in E \}$ and such that, for all $v \in V$, $\mathrm{lab}'(f(v)) = \mathrm{lab}(v)$. We write $G \cong G'$ if such a bijection exists. An isomorphism between pointed graphs $(G, v)$ and $(G', v')$ is defined similarly, with the additional requirement that $f(v) = v'$. By a node classifier we will mean a function $\mathsf{cls}$ from pointed graphs to $\{0, 1\}$ that is isomorphism invariant (i.e., such that $\mathsf{cls}(G, v) = \mathsf{cls}(G', v')$ whenever $(G, v)$ and $(G', v')$ are isomorphic). Graph neural networks Let $D, D' \in \mathbb{N}$.
A graph neural network with input dimension $D$ and output dimension $D'$ (henceforth: $(D, D')$-GNN) is a tuple $((\mathbf{COM}_i)_{i=1,\dots,L}, (\mathbf{AGG}_i)_{i=1,\dots,L})$ with $L \geq 1$, where, for $1 \leq i \leq L$, $\mathbf{COM}_i : \mathbb{R}^{2 D_i} \to \mathbb{R}^{D_{i+1}}$ and $\mathbf{AGG}_i : \mathcal{M}(\mathbb{R}^{D_i}) \to \mathbb{R}^{D_i}$, with $D_1 = D$, $D_i \geq 1$, and $D_{L+1} = D'$. Each such GNN induces a mapping from embeddings to embeddings. More precisely, by a $D$-dimensional embedding for a graph $G = (V, E, \mathrm{lab})$ we will mean a function $\mathrm{emb} : V \to \mathbb{R}^D$. Given as input a graph $G = (V, E, \mathrm{lab})$ and a $D$-dimensional embedding $\mathrm{emb}$, a GNN $\mathcal{A}$ produces a $D'$-dimensional embedding $\mathrm{emb}'$ as follows: Every $(D, D')$-GNN $\mathcal{A}$ with $D = |P|$ and $D' = 1$ naturally gives rise to a node classifier $\mathsf{cls}_{\mathcal{A}}$. To define it, for a graph $G = (V, E, \mathrm{lab})$, let its multi-hot label encoding be the $|P|$-dimensional embedding $\mathrm{emb}_G$ given by $\mathrm{emb}_G(v) = \langle r_1, \ldots, r_n \rangle$ with $r_i = 1$ if $p_i \in \mathrm{lab}(v)$ and $r_i = 0$ otherwise. Then $\mathsf{cls}_{\mathcal{A}}(G, v) = 1$ if $\mathrm{run}_{\mathcal{A}}(G, \mathrm{emb}_G)(v) > 0$, and $\mathsf{cls}_{\mathcal{A}}(G, v) = 0$ otherwise. We denote the set of all such node classifiers $\mathsf{cls}_{\mathcal{A}}$ also by GNN.
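A single message-passing layer matching the $\mathbf{COM}_i$/$\mathbf{AGG}_i$ decomposition above can be sketched as follows; Sum aggregation is used as the default, and the dict-based graph representation is purely illustrative.

```python
import numpy as np

def gnn_layer(adj_list, emb, com, agg=lambda ms: np.sum(ms, axis=0)):
    """One layer: emb'(v) = COM(emb(v) ⊕ AGG({{emb(u) : u neighbour of v}})).

    adj_list: dict node -> list of neighbours; emb: dict node -> np.array.
    com: combination function applied to the concatenated vector.
    """
    new = {}
    for v, nbrs in adj_list.items():
        msgs = [emb[u] for u in nbrs]
        aggregated = agg(msgs) if msgs else np.zeros_like(emb[v])
        new[v] = com(np.concatenate([emb[v], aggregated]))
    return new
```

With all-ones one-dimensional inputs and the identity as combination function, the layer simply computes each node's degree alongside its own feature, which already illustrates why one round of message passing can count neighbours.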
The above definition of GNNs does not specify how the functions $\mathbf{COM}_i$ and $\mathbf{AGG}_i$ are implemented. In practice, there are a few common choices for $\mathbf{AGG}_i$, namely Sum, Min, Max and Mean. These have the expected, point-wise, definition. For instance, Sum maps multisets of $\mathbb{R}^D$-tuples to $\mathbb{R}^D$-tuples by summing point-wise. As for the functions $\mathbf{COM}_i$, these are commonly implemented by a fully-connected feed-forward neural network (FFNN) using an activation function such as ReLU. Some of our results will apply to GNNs with specific aggregation and combination functions, in which case this will be explicitly indicated. Otherwise, the results apply to all GNNs. Graded modal logic The formulas of graded modal logic (GML) are given by the recursive grammar $\phi ::= p \mid \top \mid \phi \wedge \psi \mid \neg \phi \mid \diamondsuit^{\geq k} \phi$, where $p \in P$ and $k \in \mathbb{N}$. Satisfaction of such a formula at a node $v$ in a graph $G$ (denoted $G, v \models \phi$) is defined inductively, as usual, where $G, v \models p$ iff $p \in \mathrm{lab}(v)$, the Boolean operators have the standard interpretation, and $G, v \models \diamondsuit^{\geq k} \phi$ iff $|\{ u \mid (v, u) \in E$ and $G, u \models \phi \}| \geq k$. We use $\diamondsuit \phi$ as shorthand for $\diamondsuit^{\geq 1} \phi$ and we use $\Box \phi$ as a shorthand for $\neg \diamondsuit \neg \phi$. Every GML-formula $\phi$ gives rise to a node classifier $\mathsf{cls}_\phi$ where $\mathsf{cls}_\phi(G, v) = 1$ if $G, v \models \phi$ and $\mathsf{cls}_\phi(G, v) = 0$ otherwise. Example 2.1. Consider the GML-formula $\phi = \diamondsuit^{\geq 2} \top \wedge \Box p$.
Then $\mathsf{cls}_\phi(G, v) = 1$ precisely if the node $v$ has at least two successors and all its successors are labeled $p$. Weisfeiler Leman Fix a countably infinite set of colors $\mathcal{C}$. A node coloring for a graph $G = (V, E, \mathrm{lab})$ is a map $\mathrm{col} : V \to \mathcal{C}$. A coloring is discrete if for all $v \in V$, $\mathrm{col}^{-1}(\mathrm{col}(v)) = \{v\}$. By a colored graph we will mean a graph together with a coloring. The Weisfeiler Leman (WL) algorithm takes as input a colored graph $(G, \mathrm{col})$ and an integer $d \geq 0$. It produces a new coloring for the same graph, denoted $\mathrm{WL}(G, \mathrm{col}, d)$, which is computed as follows, where HASH is a perfect hash function onto the space of colors: For a graph $G = (V, E, \mathrm{lab})$, by the initial coloring of $G$ we will mean the coloring $\mathrm{col}_G$ given by $\mathrm{col}_G(v) = \mathrm{HASH}(\mathrm{lab}(v))$. We will write $\mathrm{WL}(G, d)$ as a shorthand for $\mathrm{WL}(G, \mathrm{col}_G, d)$. In other words, $\mathrm{WL}(G, d)$ denotes the coloring obtained by starting with the initial coloring and applying $d$ iterations of the algorithm. Two pointed graphs $(G, v)$, $(G', v')$ are said to be WL-indistinguishable (denoted also $(G, v) \equiv_{\mathrm{WL}} (G', v')$) if $v$ and $v'$ receive the same color after $d$ iterations for $d = \max\{|G|, |G'|\}$, that is, if $\mathrm{WL}(G, d)(v) = \mathrm{WL}(G', d)(v')$.
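The GML semantics above, together with the derived $\Box$ shorthand, are simple enough to spell out as a recursive evaluator; the tuple encoding of formulas is an illustrative choice, not notation from the paper.

```python
def sat(G, v, phi):
    """Evaluate a GML formula at node v of G = (adj, lab).

    adj maps each node to its neighbour list, lab maps each node to a
    set of proposition letters. Formulas are nested tuples:
    ("p", name), ("top",), ("and", a, b), ("not", a), ("dia", k, a),
    where ("dia", k, a) is the graded modality ◇^{≥k} a.
    """
    adj, lab = G
    op = phi[0]
    if op == "top":
        return True
    if op == "p":
        return phi[1] in lab[v]
    if op == "not":
        return not sat(G, v, phi[1])
    if op == "and":
        return sat(G, v, phi[1]) and sat(G, v, phi[2])
    if op == "dia":  # at least k successors satisfy the body
        return sum(sat(G, u, phi[2]) for u in adj[v]) >= phi[1]
    raise ValueError(f"unknown operator {op!r}")

def box(a):
    """□a as the shorthand ¬◇^{≥1}¬a."""
    return ("not", ("dia", 1, ("not", a)))
```

On the formula of Example 2.1, $\phi = \diamondsuit^{\geq 2}\top \wedge \Box p$, the evaluator accepts exactly the nodes with at least two successors all labeled $p$.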
The WL algorithm gives rise to a node classifier for each $d \geq 0$ and subset $S \subseteq \mathcal{C}$, where $\mathsf{cls}_{d,S}^{\mathrm{WL}}(G, v) = 1$ if $\mathrm{WL}(G, d)(v) \in S$ and $\mathsf{cls}_{d,S}^{\mathrm{WL}}(G, v) = 0$ otherwise. Note that, by definition, such classifiers cannot distinguish WL-indistinguishable pointed graphs. Three-way equivalence Given a collection $C$ of node classifiers (e.g., all GNN-based node classifiers), we denote by $\rho(C)$ the equivalence where $((G, v), (G', v')) \in \rho(C)$ if and only if, for all $\mathsf{cls} \in C$, $\mathsf{cls}(G, v) = \mathsf{cls}(G', v')$. In other words, $\rho(C)$ captures the expressive power of $C$ as measured by the ability to distinguish different inputs. Theorem 2.2. $\rho(\mathrm{GNN}) = \rho(\mathrm{GML}) = \rho(\mathrm{WL})$ Figure 1: Two non-isomorphic pointed graphs that are WL-indistinguishable. The equivalence in separating power between GNNs and WL was proven independently by Xu et al. [32] and Morris et al. [26]. Their equivalence with GML was shown by Barcelo et al. [4]. Indeed, it was shown in [4] that for every GML-formula, there is a GNN that implements the same node classifier: Proposition 2.3. ([4]) For every GML-formula $\phi$ there is a GNN $\mathcal{A}$ such that $\mathsf{cls}_{\mathcal{A}} = \mathsf{cls}_\phi$. Moreover, the GNN in question only uses Sum as aggregation and a single ReLU-FFNN as combination function. The converse does not hold in general, but it does when we bound the degree of the input graph: Proposition 2.4.
Let $\mathcal{A}$ be a $(D, D')$-GNN with $D = |P|$, let $N > 0$, and let $X = \{\mathrm{run}_{\mathcal{A}}(G, \mathrm{emb}_G)(v) \mid G = (V, E, \mathrm{lab})$ is a graph of degree at most $N$ and $v \in V\}$. In other words, $X \subseteq \mathbb{R}^{D'}$ is the set of all node embeddings that $\mathcal{A}$ can produce when run on a graph of degree at most $N$. Then $X$ is a finite set, and for each $\mathbf{x} \in X$, there is a GML-formula $\phi_{\mathbf{x}}$ such that for every pointed graph $(G, v)$ of degree at most $N$, it holds that $G, v \models \phi_{\mathbf{x}}$ iff $\mathrm{run}_{\mathcal{A}}(G, \mathrm{emb}_G)(v) = \mathbf{x}$. In particular, for each GNN-classifier $\mathsf{cls}_{\mathcal{A}}$ there is a GML-formula $\phi$ such that $\mathsf{cls}_\phi(G, v) = \mathsf{cls}_{\mathcal{A}}(G, v)$ for all pointed graphs $(G, v)$ of degree at most $N$.

Proofs for these two propositions are provided in the appendix, as we will build on them.

# 3 Hierarchical Ego GNNs

In this section, we introduce and study the basic model of Hierarchical Ego GNNs (HE-GNNs). In the next section, we will further refine the model by means of subgraph restrictions.

# Hierarchical Ego GNNs

• A $(D, D')$-HE-GNN of nesting depth 0 is simply a $(D, D')$-GNN.

• A $(D, D')$-HE-GNN of nesting depth $d > 0$ is a pair $(B, C)$ where $B$ is a $(D+1, D'')$-HE-GNN of nesting depth $d - 1$ and $C$ is a $(D + D'', D')$-GNN.
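To make the nesting idea concrete before the formal treatment, here is a toy sketch of a depth-1 ego computation, with assumed names and a dict-of-sets adjacency for simple graphs without self-loops (this is an illustration of the marking idea, not the actual HE-GNN computation): mark a node uniquely, propagate the mark by plain neighbour aggregation, and check whether it returns.

```python
def mark_and_return(adj, v, steps=3):
    """Uniquely mark v, propagate the mark for `steps` rounds of
    neighbour aggregation, and test whether it reaches v again.
    With steps=3 on a simple graph this holds exactly when v lies
    on a triangle (a closed walk of length 3 must be a triangle)."""
    frontier = {v}
    for _ in range(steps):
        frontier = {u for w in frontier for u in adj[w]}
    return v in frontier
```

For example, a node of two disjoint triangles and a node of a six-cycle receive identical WL colours (both graphs are 2-regular), yet the marked computation separates them; this is precisely the kind of gain that running an inner network on a uniquely marked copy of the graph provides.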
Like GNNs, HE-GNNs define a mapping from graph embeddings to graph embeddings, as follows: for an HE-GNN $\mathcal{A} = (B, C)$ of nesting depth $d > 0$,

$$ \mathrm{run}_{\mathcal{A}}(G, \mathrm{emb})(v) = \mathrm{run}_{C}(G, \mathrm{emb}')(v), \quad \text{where} \quad \mathrm{emb}'(u) = \mathrm{emb}(u) \oplus \mathrm{run}_{B}(G, \mathrm{emb}[u])(u) $$

and $\mathrm{emb}[u]$ extends each node's embedding with one extra dimension that is $1$ at $u$ and $0$ elsewhere. In other words, for each node $v$, we run $B$ after extending the node embeddings to uniquely mark $v$, and concatenate the resulting embedding for $v$ to its original embedding. After constructing a new embedding for each $v$, we run $C$. Just as in the case of GNNs, each $(D, D')$-HE-GNN $\mathcal{A}$ with $D = |P|$ and $D' = 1$ naturally gives rise to a node classifier $\mathsf{cls}_{\mathcal{A}}$. Let HE-GNN-$d$ denote the set of all classifiers $\mathsf{cls}_{\mathcal{A}}$ where $\mathcal{A}$ is an HE-GNN of nesting depth $d$. As we will see below, the classes HE-GNN-$d$, for increasing values of $d$, form an infinite hierarchy with respect to expressive power, and the pointed graphs $(G, v)$ and $(G', v')$ from Figure 1, which cannot be distinguished by a GNN, can already be distinguished by an HE-GNN of nesting depth 1. To show this, we will first give a logical characterization of the separating power of HE-GNN-$d$.

Graded hybrid logic Graded hybrid logic (henceforth $\mathrm{GML}(\downarrow)$) extends GML with variables and the variable binder $\downarrow$. To be precise, the formulas of $\mathrm{GML}(\downarrow)$ are generated by the grammar $\phi ::= p \mid x \mid \neg\phi \mid \phi \wedge \psi \mid \Diamond^{\geq k}\phi \mid {\downarrow}x.\phi$. We will restrict attention to sentences, i.e., formulas without free variables. The definition of satisfaction for a GML-formula at a node $v$ of a graph $G = (V, E, \mathrm{lab})$ extends naturally to $\mathrm{GML}(\downarrow)$-sentences as follows: $G, v \models {\downarrow}x.
\phi$ if $G[x \mapsto v], v \models \phi$, where $G[x \mapsto v]$ denotes a copy of $G$ in which $x$ is treated as a binary node feature true only at $v$. By the $\downarrow$-nesting-depth of a $\mathrm{GML}(\downarrow)$-sentence, we will mean the maximal nesting of $\downarrow$ operators in the sentence. We denote with $\mathrm{GML}(\downarrow^d)$ all sentences with maximal $\downarrow$-nesting-depth $d$.

Example 3.1. The sentence $\phi = {\downarrow}x.\Diamond\Diamond\Diamond x$, which has $\downarrow$-nesting-depth 1, is satisfied by a pointed graph $(G, v)$ precisely if $v$ lies on a triangle. In particular, considering the example in Figure 1, $\phi$ distinguishes $(G, v)$ from $(G', v')$. This also shows that $\mathrm{GML}(\downarrow)$ is more expressive than GML.

Example 3.2. Building on the above example, the sentence $\psi = {\downarrow}x.\Diamond(\phi \wedge \Diamond(\phi \wedge \Diamond(\phi \wedge \Diamond(\phi \wedge x))))$, which has $\downarrow$-nesting-depth 2, is satisfied by $(G, v)$ precisely if $v$ lies (homomorphically) on a cycle of length 4 consisting of nodes that each lie on a triangle.

In the literature, hybrid logics often include an $@$ operator, where $@_x\phi$ states that $\phi$ holds at the world denoted by the variable $x$. Over undirected graphs, however, every $\mathrm{GML}({\downarrow}, @)$-sentence is already equivalent to a $\mathrm{GML}(\downarrow)$-sentence of the same $\downarrow$-nesting-depth. The connection between GNNs and GML described in the previous section extends to a connection between HE-GNNs and $\mathrm{GML}(\downarrow)$:

Theorem 3.3. $\begin{array}{r} { \rho(\text{HE-GNN}) = \rho(\mathrm{GML}({\downarrow})).
\ \text{Moreover, for } d \geq 0,\ \rho(\text{HE-GNN-}d) = \rho(\mathrm{GML}({\downarrow}^{d})). } \end{array}$

The proof, given in the appendix, is along the lines of Propositions 2.3 and 2.4. Indeed, there is a translation from $\mathrm{GML}(\downarrow)$-sentences to HE-GNNs, and, conversely, over bounded-degree inputs, there is a translation from HE-GNNs to $\mathrm{GML}(\downarrow)$-sentences. Both translations preserve nesting depth. In Section 5, we put this logical characterization to use to obtain a number of further results.

# 4 Hierarchical Ego GNNs with subgraph restriction

HE-GNNs of nesting depth $d$ perform message passing on $|G|^d$ copies of the input graph $G$, each with a different unique coloring. One way to make these models more manageable is to restrict to subgraphs of a fixed radius $r$ around the uniquely marked node, in line with the common approach for subgraph-GNNs [14, 33, 34, 35].

# Hierarchical Ego Subgraph-GNNs

• A $(D, D')$-HES-GNN of depth 0 is simply a $(D, D')$-GNN.

• A $(D, D')$-HES-GNN of depth $d > 0$ is a triple $(B, C, r)$ where $B$ is a $(D+1, D'')$-HES-GNN of depth $d - 1$, $C$ is a $(D + D'', D')$-GNN, and $r$ is a positive integer.

Given a graph $G = (V, E, \mathrm{lab})$, a node $v \in V$, and a positive integer $r$, we will denote by $G_v^r$ the induced subgraph of $G$ containing the radius-$r$ neighborhood of $v$. The only change compared to the definition of HE-GNNs is that $B$ is now run on $G_v^r$ rather than on $G$. Each $(D, D')$-HES-GNN $\mathcal{A}$ again gives rise to a node classifier $\mathsf{cls}_{\mathcal{A}}$.
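The extraction of $G_v^r$ can be sketched as a breadth-first search. This is a sketch with assumed names, for graphs given as dicts mapping nodes to neighbour sets.

```python
from collections import deque

def ego_subgraph(adj, v, r):
    """Induced subgraph on the radius-r neighbourhood of v, i.e. G_v^r."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        w = queue.popleft()
        if dist[w] == r:
            continue                      # do not expand beyond radius r
        for u in adj[w]:
            if u not in dist:
                dist[u] = dist[w] + 1
                queue.append(u)
    keep = set(dist)
    # Retain only edges between kept nodes, so the subgraph is induced.
    return {w: adj[w] & keep for w in keep}
```

On a five-node path with $v$ in the middle and $r = 1$, this returns the three-node subpath centred at $v$.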
We denote with HES-GNN-$r$ the set of such classifiers for radius $r$, and with HES-GNN-$(d, r)$ the restriction to nesting depth $d$.

Graded hybrid subgraph logic $\mathrm{GML}({\downarrow}, W)$ further extends $\mathrm{GML}(\downarrow)$ with a "within" operator $W^r$, inspired by temporal logics with forgettable past [1, 8]. The formulas of $\mathrm{GML}({\downarrow}, W)$ are generated by $\phi ::= p \mid x \mid \neg\phi \mid \phi \wedge \psi \mid \Diamond^{\geq k}\phi \mid {\downarrow}x.\phi \mid W^r\phi$. The definition of satisfaction for $\mathrm{GML}(\downarrow)$-sentences is extended by letting $G, v \models W^r\phi$ if $G_v^r, v \models \phi$. We will use $\downarrow_{W^r}\!x.\phi$ as a shorthand for ${\downarrow}x.W^r\phi$. We denote with $\mathrm{GML}(\downarrow_W)$ the fragment of $\mathrm{GML}({\downarrow}, W)$ in which $\downarrow$ and $W$ can only be used in this specific combination with each other, and with $\mathrm{GML}(\downarrow_{W^r}^d)$ (for specific integers $d$ and $r$) the further fragment with radius $r$ in which $\downarrow_{W^r}$ can be nested at most $d$ times. In terms of separating power, $\mathrm{GML}(\downarrow_W)$ is equivalent to $\mathrm{GML}(\downarrow)$, but pairing variable binders with subgraph restrictions of a specific radius serves to decrease expressive power. The connection between HE-GNNs and $\mathrm{GML}(\downarrow)$ established in Theorem 3.3 now extends to the case with subgraph restrictions:

Theorem 4.1.

1. $\rho(\text{HES-GNN}) = \rho(\mathrm{GML}(\downarrow_W)) = \rho(\mathrm{GML}(\downarrow)) = \rho(\text{HE-GNN})$.

2.
$\rho(\text{HES-GNN-}r) = \rho(\mathrm{GML}(\downarrow_{W^r}))$. Moreover, $\rho(\text{HES-GNN-}(d, r)) = \rho(\mathrm{GML}(\downarrow_{W^r}^d))$.

This is established again through a uniform translation from $\mathrm{GML}(\downarrow_{W^r}^d)$-sentences to HES-GNN-$(d, r)$ classifiers and a converse uniform translation, over bounded-degree inputs, from HES-GNN-$(d, r)$ classifiers to $\mathrm{GML}(\downarrow_{W^r}^d)$-sentences. The separating power of HES-GNN-$(d, r)$ classifiers strictly increases with $d$ and $r$:

Theorem 4.2. For $d \geq 1$ and $r \geq 0$, $\rho(\text{HES-GNN-}(d, r+1)) \subsetneq \rho(\text{HES-GNN-}(d, r))$.

Theorem 4.3. For $d \geq 0$ and $r \geq 3$, $\rho(\text{HES-GNN-}(d+1, r)) \subsetneq \rho(\text{HES-GNN-}(d, r))$.

Note that $\rho(X) \subsetneq \rho(Y)$ means that $X$ has strictly more separating power than $Y$.

# 5 Comparison with other models

In this section, we build on the logical characterizations from the previous sections to obtain a number of technical results, drawing connections between isomorphism testing and several GNN architectures by comparing their expressive power with that of HE-GNNs and HES-GNNs.

# 5.1 Relationship with Individualization-Refinement

The individualization-refinement (IR) paradigm is applied by all state-of-the-art graph isomorphism solvers [11, 20, 23, 28]. In its usual presentation (e.g., [24]), color refinement, cell-selection, and individualization procedures are applied in alternating fashion, resulting in a tree whose nodes are labeled by increasingly refined colorings of the input graph, with discrete colorings at the leaves.
In practice, some further optimizations are typically implemented, exploiting symmetries to further reduce the size of the tree, but these optimizations do not affect the result of the equivalence test, and we do not consider them here. In order to make a precise comparison, we define WL-IR, which uses WL as the refinement procedure (as is indeed common practice) and a simple cell-selection procedure that assumes an order on the set of colors and picks the least non-singleton color. We include an extra input parameter $d$ that controls the number of individualization steps the algorithm can perform. A complete IR algorithm, guaranteed to produce discrete colorings, is obtained by choosing $d \geq |G|$. The WL-IR algorithm, then, proceeds as follows: given $(G, \mathrm{col}, d)$, first refine $\mathrm{col}$ with WL; if the resulting coloring is discrete or $d = 0$, return a leaf labeled by it; otherwise, select the least non-singleton color class and, for each node $v$ in it, individualize $v$ by assigning it a fresh color and recurse with parameter $d - 1$, returning a tree whose children are the resulting subtrees. We write $\mathrm{WL\text{-}IR}(G, d)$ as shorthand for $\mathrm{WL\text{-}IR}(G, \mathrm{col}_G, d)$. We can now compare two graphs by choosing a suitable $d$ and testing whether $\mathrm{WL\text{-}IR}(G, d)$ and $\mathrm{WL\text{-}IR}(G', d)$ yield isomorphic trees. Let $G \equiv_{\mathrm{WL\text{-}IR\text{-}}d} G'$ if and only if this comparison does not distinguish $G$ from $G'$. For $d = 0$, it is clear that $\equiv_{\mathrm{WL\text{-}IR\text{-}}d}$ corresponds simply to indistinguishability by the Weisfeiler-Leman test. If $d$ is sufficiently large, on the other hand, each leaf of the tree is labeled by a discrete coloring, so that WL-IR distinguishes graphs up to isomorphism:

Proposition 5.1. Let $G, G'$ be graphs and $d \geq \min(|G|, |G'|)$. Then $G \equiv_{\mathrm{WL\text{-}IR\text{-}}d} G'$ iff $G \cong G'$.

Thus, by varying $d$ we obtain a family of increasingly refined equivalence relations for graphs.
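The construction of the WL-IR tree can be sketched end to end. This is a self-contained sketch with assumed names: graphs are dicts of neighbour sets, and a canonical integer relabelling of refinement signatures plays the role of the perfect hash, which keeps the colors of two different graphs comparable.

```python
from collections import defaultdict

def refine(adj, colors):
    """1-WL refinement run to a stable colouring (|G| iterations suffice)."""
    for _ in range(len(adj)):
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        table, colors = {}, {}
        for v in sorted(sigs, key=lambda v: repr(sigs[v])):
            table.setdefault(sigs[v], len(table))
            colors[v] = table[sigs[v]]
    return colors

def wl_ir_tree(adj, colors, d):
    """WL-IR sketch: refine, then individualize every node of the least
    non-singleton colour class and recurse, for at most d individualization
    steps.  Two graphs receive equal trees iff the test cannot separate them."""
    colors = refine(adj, colors)
    classes = defaultdict(list)
    for v, c in colors.items():
        classes[c].append(v)
    cells = [vs for c, vs in sorted(classes.items()) if len(vs) > 1]
    hist = tuple(sorted(colors.values()))      # colour histogram at this tree node
    if d == 0 or not cells:
        return (hist, ())                      # leaf: budget spent or colouring discrete
    fresh = max(colors.values()) + 1
    children = [wl_ir_tree(adj, {**colors, v: fresh}, d - 1)
                for v in cells[0]]             # individualize each cell member
    return (hist, tuple(sorted(children)))
```

With $d = 0$ this reduces to plain WL; a single individualization step already separates, for example, two disjoint triangles from a six-cycle, which plain WL cannot.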
In order to relate these equivalence relations to those induced by HE-GNNs of different nesting depths, we must first overcome a technical issue: WL-IR is designed to compare graphs, not nodes. Let $G \equiv_{\mathsf{cls}} G'$ if $\{\!\!\{\mathsf{cls}(G, v) \mid v \in V\}\!\!\} = \{\!\!\{\mathsf{cls}(G', v) \mid v \in V'\}\!\!\}$, and $G \equiv_C G'$ if $G \equiv_{\mathsf{cls}} G'$ for all classifiers $\mathsf{cls}$ in $C$. The graph separating power of WL-IR with depth 0 matches that of GNNs.

Proposition 5.2. $G \not\equiv_{\mathrm{WL\text{-}IR\text{-}}0} G'$ if and only if $G \not\equiv_{\mathrm{GNN}} G'$.

The graph separating power of WL-IR-$d$ is a lower bound to that of HE-GNN-$d$ for $d \geq 0$:

Theorem 5.3. For $d \geq 0$, if $G \not\equiv_{\mathrm{WL\text{-}IR\text{-}}d} G'$, then $G \not\equiv_{\mathrm{HE\text{-}GNN\text{-}}d} G'$.

In fact, for connected graphs, and with depth $d + 1$, HE-GNN node classifiers already suffice:

Theorem 5.4. Let $(G, v)$, $(G', v')$ be connected pointed graphs and $d \geq 0$. If $G \not\equiv_{\mathrm{WL\text{-}IR\text{-}}d} G'$, there exists a nesting-depth-$(d+1)$ HE-GNN $\mathcal{A}$ such that $\mathsf{cls}_{\mathcal{A}}(G, v) \neq \mathsf{cls}_{\mathcal{A}}(G', v')$.

Proposition 5.1 and Theorems 5.3 and 5.4 show that for sufficiently large $d$, HE-GNN-$d$ classifiers distinguish graphs up to isomorphism.

Efficient Graph Learning with Individualization and Refinement Dupty and Lee [12] previously developed a practical graph learning approach following the IR paradigm, making several approximations: individualized graphs are merged into a single representation, and similar but distinct node features are treated as a single cell.
The resulting model is more efficient than HE-GNNs, but does not allow for a logical characterization or an analysis of separating power beyond standard GNNs.

# 5.2 Relationship with homomorphism count enriched GNNs

In [3], the authors assume a finite set of rooted graphs $\mathfrak{F} = \{F_1, \dots, F_k\}$ and, given an input graph $G$, for each node $v$, they add the homomorphism count vector $\mathrm{hom}(\mathfrak{F}, (G, v))$ to the initial embedding of $v$ before running a GNN. Here, by a rooted graph we mean a pointed graph $(F, u)$ that is connected, i.e., such that every node of $F$ is reachable from $u$. Note that the input dimensionality of the GNN is thus assumed to be $|P| + k$ instead of $|P|$. This increases the expressive power of the model. For example, the non-isomorphic nodes in Figure 1 can be distinguished from each other by including the cycle of length 3 (with a distinguished node) in $\mathfrak{F}$. We will refer to a GNN that runs over a $\mathfrak{F}$-enriched graph simply as an $\mathfrak{F}$-GNN.

Theorem 5.5. Let $\mathfrak{F}$ be any finite set of rooted graphs, each with at most $d$ nodes. Then there is a $(|P|, |P| + |\mathfrak{F}|)$-HE-GNN $\mathcal{A}$ of nesting depth $d$ such that, for all pointed graphs $(G, v)$,

$$ \mathrm{run}_{\mathcal{A}}(G)(v) = \mathrm{emb}_G(v) \oplus \mathrm{hom}(\mathfrak{F}, (G, v)) $$

The HE-GNN in question only uses Sum as aggregation and ReLU-FFNNs as combination functions. In particular, this shows that every $\mathfrak{F}$-GNN is equivalent to an HE-GNN.

The practical value of the above result, however, is limited by the fact that it requires a high nesting depth. As it turns out, for many choices of $\mathfrak{F}$, a very small nesting depth suffices.
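A brute-force version of the rooted homomorphism count can be written directly from the definition. This is a sketch with assumed names: node labels are ignored, graphs are dicts of neighbour sets, and the exhaustive enumeration is adequate only for the small patterns typically collected in $\mathfrak{F}$.

```python
from itertools import product

def rooted_hom_count(F_adj, root, G_adj, v):
    """Number of homomorphisms h from the rooted graph (F, root) to G
    with h(root) = v: every edge of F must map to an edge of G."""
    F_nodes = sorted(F_adj)
    count = 0
    for image in product(sorted(G_adj), repeat=len(F_nodes)):
        h = dict(zip(F_nodes, image))
        if h[root] != v:
            continue                          # root must land on v
        if all(h[b] in G_adj[h[a]] for a in F_adj for b in F_adj[a]):
            count += 1                        # all edges preserved
    return count
```

For instance, the count of the rooted triangle at a node is twice the number of triangles through it (one homomorphism per orientation), which is exactly the kind of feature that separates the nodes of Figure 1.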
We will call a rooted graph $(F, u)$ c-acyclic if every cycle of $F$ passes through $u$. C-acyclicity is a relaxation of acyclicity, and c-acyclic rooted graphs can be thought of as trees with back-edges. Our next result will imply that when $\mathfrak{F}$ consists of c-acyclic structures, a nesting depth of 1 suffices. In order to state it in full generality, we need to introduce some further terminology; in particular, the notion of ego-rank.

Figure 2: Rooted $5 \times 2$-grid (root: $u_1$)

Given a rooted graph $(G, v)$, let $dep : V \to V \cup \{\bot\}$ be a partial function from nodes to nodes, and let $deps(u) = \{dep(u), dep(dep(u)), \dots\} \setminus \{\bot\}$ be the (finite) set of nodes that $u$ transitively "depends on". We require of the function $dep$ that:

1. $dep(v) = \bot$.
2. If $(w, u) \in E$, then $dep(w) = dep(u)$, or $w \in deps(u)$, or $u \in deps(w)$.
3. Every set of nodes with the same $dep$-value induces an acyclic subgraph.

The ego-rank of $(G, v)$ is the smallest value, across all ways to choose the function $dep$ subject to the above constraints, of the maximum node rank, where the node rank of a node $u$ is $|deps(u)|$.

Proposition 5.6. For all rooted graphs $(G, v)$ with $G = (V, E, \mathrm{lab})$:

1. $\mathrm{tree\text{-}width}(G) - 1 \leq \mathrm{ego\text{-}rank}(G, v) \leq |V|$.
2. $\mathrm{ego\text{-}rank}(G, v) = 0$ if and only if $G$ is acyclic.
3. $\mathrm{ego\text{-}rank}(G, v) = 1$ whenever $(G, v)$ is c-acyclic.

The ego-rank of a rooted graph is not upper-bounded by any function of its tree-width, and can be exponential in its tree-depth, as the following example shows:

Example 5.7.
The rooted graph consisting of an $n \times 2$-grid, with one of its corners as the root, as depicted in Figure 2, has ego-rank $n - 1$ for $n \geq 1$.

Theorem 5.8. Let $\mathfrak{F}$ be any finite set of rooted graphs and let $d = \max\{\mathrm{ego\text{-}rank}(F, u) \mid (F, u) \in \mathfrak{F}\}$.

1. There is a $(|P|, |P| + |\mathfrak{F}|)$-HE-GNN $\mathcal{A}$ of nesting depth $d$ such that, for all pointed graphs $(G, v)$, $\mathrm{run}_{\mathcal{A}}(G)(v) = \mathrm{emb}_G(v) \oplus \mathrm{hom}(\mathfrak{F}, (G, v))$. This HE-GNN uses multiplication in its combination functions.

2. For each $N > 0$, there is a $(|P|, |P| + |\mathfrak{F}|)$-HE-GNN $\mathcal{A}$ of nesting depth $d$ such that, for all pointed graphs $(G, v)$ of degree at most $N$, $\mathrm{run}_{\mathcal{A}}(G)(v) = \mathrm{emb}_G(v) \oplus \mathrm{hom}(\mathfrak{F}, (G, v))$. This HE-GNN uses only Sum as aggregation and ReLU-FFNNs as combination functions.

It follows that, in a non-uniform sense, every $\mathfrak{F}$-GNN is equivalent to an HE-GNN of nesting depth $\max\{\mathrm{ego\text{-}rank}(F, u) \mid (F, u) \in \mathfrak{F}\}$ using only Sum and ReLU-FFNNs.

# 5.3 Relationship with higher-order GNNs

$k$-GNNs, as proposed by Morris et al. [26], apply message passing between node subsets of size $k$, where two subsets are adjacent when they share exactly $k - 1$ nodes. The separating power of these models is characterized by the $k$-variable fragment of first-order logic with counting quantifiers, $\mathsf{C}^k$.

Theorem 5.9 ([7, 26]). $\rho(k\text{-GNN}) = \rho(\mathsf{C}^k)$.

$k$-GNNs become strictly more expressive with larger $k$, and distinguish graphs of size $n \leq k$ up to isomorphism.
As such, they have proven to be a useful yardstick for the development of expressive GNN architectures. The model proposed by Morris et al. requires exponential space and time, since it maintains $n^k$ features in memory and applies message passing over $n^{k+1}$ edges. Maron et al. [22] proposed $k$-IGNs with $O(n^{k-1})$ space complexity and $O(n^k)$ time complexity, which were shown by Azizian and Lelarge [2] and Geerts [15] to have the same separating power as $k$-GNNs. HE-GNNs constitute an alternative hierarchy, where nesting depth $d$ yields classifiers that distinguish graphs of size $\leq d$ up to isomorphism (Theorem 5.3). HE-GNNs perform simple message passing on $n^d$ graphs, but they can do so in sequence and hence need to store only $n^2$ node features. Using the logical characterizations, we can show that the separating power of HE-GNNs with nesting depth $d$ is at most that of $(d+2)$-GNNs, or equivalently, of the $(d+1)$-WL algorithm.

Theorem 5.10. For $d \geq 0$, $\rho((d+1)\text{-WL}) = \rho((d+2)\text{-GNN}) \subseteq \rho(\text{HE-GNN-}d)$.

This generalizes a recent result by Frasca et al. [14], who showed that the separating power of many common subgraph-GNNs is bounded by that of 2-WL. We further show that this result is optimal, in the sense that HE-GNNs with nesting depth $d$ can distinguish nodes that $(d+1)$-GNNs cannot:

Theorem 5.11. For $d \geq 0$, HES-GNN-$(d, 3)$ can distinguish pointed graphs that cannot be distinguished by $d$-WL, or equivalently, by a $(d+1)$-GNN.

# 5.4 Relationship with subgraph-GNNs

Numerous recent studies [14, 33, 34, 35, 36] have proposed variants of subgraph-GNNs, where message passing is applied to a collection of subgraphs.
Subgraph-GNNs show state-of-the-art performance on real-world benchmarks such as ZINC molecular property prediction [6, 14, 36]. In particular, a variant of subgraph-GNNs in which nodes receive neighborhood embeddings based on their radius-$r$ neighborhood with a distinguished center-node label, also known as "ego-networks", has become prominent as a simple yet expressive subgraph-GNN architecture [14, 33, 35]. ID-GNNs [33] are HES-GNNs $(B, C)$ of nesting depth 1 where $C$ is a trivial GNN that does not apply any message passing. Nested GNNs [35] do not use individualization, but perform global pooling over subgraphs followed by message passing over the input graph. Since local aggregation with individualization is strictly more expressive than global aggregation over connected subgraphs [34], the separating power of nested GNNs is strictly less than that of HES-GNNs of nesting depth 1. Theorem 4.1 thus provides a logical upper bound on the separating power of these models. Several other generalizations of subgraph-GNNs have recently been proposed [27, 29]. Most closely related to this work, Qian et al. [29] introduced Ordered Subgraph Aggregation Networks (OSAN), which apply message passing on $|G|^k$ copies of $G$, each labeled with the atomic type of a size-$k$ subgraph, and perform aggregation over the representations of a predefined selection of the $|G|^k$ subgraphs. Like HE-GNNs, this constitutes a strict hierarchy with separating power upper-bounded by $(k+1)$-WL but not by $k$-WL. It is not immediately clear whether OSAN yields more expressive node classifiers than HE-GNNs. On the one hand, OSAN performs global aggregation over arbitrary subsets of an exponential number of subgraph representations per node, whereas HE-GNNs rely solely on local message passing.
On the other hand, OSAN encodes subgraph information as an unstructured multiset, while HE-GNNs combine subgraph representations through a hierarchical message-passing scheme.

# 6 Experiments

Table 1: Mean absolute error on ZINC-12k; test scores after 10 runs.

Figure 3: Fraction of correctly classified isomorphisms of 10 (26,10,3,4) strongly regular graphs. Average and standard deviation over 10 runs. Strongly regular graphs are 2-WL-indistinguishable.

We apply HES-GNN-(2,3) to ZINC-12k [13, 18] and compare with standard GCN [21] and GIN [32] layers, both of which match our definition of GNNs. We further compare with $\mathcal{F}$-GIN, enriched with homomorphism counts from the cycles $\mathcal{C}_3, \ldots, \mathcal{C}_{10}$, which were shown by Barceló et al. [3] to give optimal performance. We use hidden dimension 256 for all models, training for a maximum of 1000 epochs on a single 20GB GPU. All code used for the experiments is available on git.

Table 1 shows the mean absolute error achieved over 10 runs, with model selection on the validation score. HES-GNNs of depth 1 outperform $\mathcal{F}$-GIN even though ZINC has cycle counts in its target function. HES-GNN-(2,3) performs equally well as $\mathcal{F}$-HES-GNN-(1,3), a depth-1 model augmented with homomorphism counts. Figure 3 shows the performance of hierarchical ego networks on distinguishing strongly regular graphs. We use a synthetic dataset of 30 random isomorphisms of 10 graphs with parameters (26,10,3,4), obtained from [30]. Since strongly regular graphs are indistinguishable by 2-WL, and hence by 3-GNNs, we should expect that HE-GNN-1 does not perform above chance on this task. Adding cycle counts does not alleviate this, in line with Theorem 5.8. Interestingly, depth-2 HE-GNNs and HES-GNNs can distinguish the (26,10,3,4) graphs and generalize to unseen isomorphisms, even with a restricted subgraph radius.
# 7 Limitations and directions for further research

Efficiency Compared to $k$-GNNs, HE-GNNs store $|G|^2$ instead of $|G|^k$ node features. Nevertheless, they still require a number of message-passing steps that is exponential in the nesting depth $d$, rendering implementations infeasible for large $d$. IR-based graph isomorphism tests typically reduce tree size via informed cell selection and automorphism pruning, whereas Dupty and Lee [12] address scalability in a learning setting through compressed approximate trees. Further study is needed to explore how techniques from isomorphism testing can be adapted to graph representation learning, and how these optimizations affect expressive power.

Expressive power Some questions regarding separating power are left open by our results. In particular, we have identified classes of graphs from which HE-GNN-$d$ can count homomorphisms, in terms of size and ego-rank, but these characterizations are not exhaustive. It remains open to give an exact description of HE-GNN-distinguishable graphs in terms of homomorphism count vectors. Our results are also limited in scope by the fact that they concern separating power and not the uniform expressibility of functions. Several uniform expressibility results have recently been obtained for GNNs (both relative to first-order logic [4] and in terms of Presburger logic [5]), and it remains to be seen whether these results can be extended to HE-GNNs.

Beyond expressive power The empirical success of GNNs cannot be understood through expressive power alone, as it also depends on trainability aspects such as convergence and generalization. These remain to be explored more extensively, both across GNN architectures and in the context of HE-GNNs.

References

[1] Rajeev Alur, Marcelo Arenas, Pablo Barceló, Kousha Etessami, Neil Immerman, and Leonid Libkin. First-order and temporal logics for nested words.
In Proceedings of the 22nd Annual IEEE Symposium on Logic in Computer Science, LICS ’07, page 151–160, USA, 2007. IEEE Computer Society. [2] W. Azizian and M. Lelarge. Characterizing the expressive power of invariant and equivariant graph neural networks. CoRR, abs/2006.15646, 2020. [3] Pablo Barceló, Floris Geerts, Juan Reutter, and Maksimilian Ryschkov. Graph neural networks with local graph parameters. Advances in Neural Information Processing Systems, 34:25280– 25293, 2021. [4] Pablo Barceló, Egor V Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, and Juan-Pablo Silva. The logical expressiveness of graph neural networks. In 8th International Conference on Learning Representations (ICLR 2020), 2020. [5] Michael Benedikt, Chia-Hsuan Lu, Boris Motik, and Tony Tan. Decidability of Graph Neural Networks via Logical Characterizations. In Karl Bringmann, Martin Grohe, Gabriele Puppis, and Ola Svensson, editors, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024), volume 297 of Leibniz International Proceedings in Informatics (LIPIcs), pages 127:1–127:20, Dagstuhl, Germany, 2024. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. [6] Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam Srinivasan, Chen Cai, Gopinath Balamurugan, Michael M. Bronstein, and Haggai Maron. Equivariant subgraph aggregation networks. In International Conference on Learning Representations, 2022. [7] Jin-Yi Cai, Martin Fürer, and Neil Immerman. An optimal lower bound on the number of variables for graph identification. Combinatorica, 12(4):389–410, 1992. [8] Balder ten Cate and Luc Segoufin. Transitive closure logic, nested tree walking automata, and XPath. J. ACM, 57(3), March 2010. [9] Jesse Comer. Lovász theorems for modal languages. In Agata Ciabattoni, David Gabelaia, and Igor Sedlár, editors, Advances in Modal Logic, AiML 2024, Prague, Czech Republic, August 19-23, 2024, pages 269–292. College Publications, 2024. 
[10] Leonardo Cotta, Christopher Morris, and Bruno Ribeiro. Reconstruction for powerful graph representations. Advances in Neural Information Processing Systems, 34:1713–1726, 2021. [11] Paul T. Darga, Hadi Katebi, Mark Liffiton, Igor L. Markov, and Karem Sakallah. Saucy3. http://vlsicad.eecs.umich.edu/BK/SAUCY/, n.d. Software package. [12] Mohammed Haroon Dupty and Wee Sun Lee. Graph representation learning with individualization and refinement. arXiv preprint arXiv:2203.09141, 2022. [13] Vijay Prakash Dwivedi, Chaitanya K Joshi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Benchmarking graph neural networks. Journal of Machine Learning Research, 24(43):1–48, 2023. [14] Fabrizio Frasca, Beatrice Bevilacqua, Michael Bronstein, and Haggai Maron. Understanding and extending subgraph GNNs by rethinking their symmetries. Advances in Neural Information Processing Systems, 35:31376–31390, 2022. [15] Floris Geerts. The expressive power of kth-order invariant graph networks. arXiv preprint arXiv:2007.12035, 2020. [16] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International conference on machine learning, pages 1263–1272. PMLR, 2017. [17] William L. Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. IEEE Data Eng. Bull., 40(3):52–74, 2017. [18] John J Irwin, Teague Sterling, Michael M Mysinger, Erin S Bolstad, and Ryan G Coleman. Zinc: a free tool to discover chemistry for biology. Journal of chemical information and modeling, 52(7):1757–1768, 2012. [19] Emily Jin, Michael Bronstein, Ismail Ilkan Ceylan, and Matthias Lanzinger. Homomorphism counts for graph neural networks: All about that basis. arXiv preprint arXiv:2402.08595, 2024. [20] Tommi Junttila and Petteri Kaski. Engineering an efficient canonical labeling tool for large and sparse graphs. 
In 2007 Proceedings of the Ninth Workshop on Algorithm Engineering and Experiments (ALENEX), pages 135–149. SIAM, 2007. [21] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017. [22] Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. Advances in Neural Information Processing Systems, 32, 2019. [23] Brendan D. McKay and Adolfo Piperno. Nauty and traces user guide. https://cs.anu.edu.au/people/Brendan.McKay/nauty/nug25.pdf, 2014. Accessed: 2025-04-24. [24] Brendan D McKay and Adolfo Piperno. Practical graph isomorphism, II. Journal of Symbolic Computation, 60:94–112, 2014. [25] Christopher Morris, Gaurav Rattan, and Petra Mutzel. Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings. Advances in Neural Information Processing Systems, 33:21824–21840, 2020. [26] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 4602–4609, 2019. [27] Pál András Papp and Roger Wattenhofer. A theoretical comparison of graph neural network extensions. In International Conference on Machine Learning, pages 17323–17345. PMLR, 2022. [28] Adolfo Piperno. Search space contraction in canonical labeling of graphs. arXiv preprint arXiv:0804.4881, 2008. [29] Chendi Qian, Gaurav Rattan, Floris Geerts, Mathias Niepert, and Christopher Morris. Ordered subgraph aggregation networks. Advances in Neural Information Processing Systems, 35:21030–21045, 2022. [30] E. Spence. Strongly regular graphs on at most 64 vertices. https://www.maths.gla.ac.uk/~es/srgraphs.php, n.d. Accessed: 2025-05-10. [31] Petar Veličković. Everything is connected: Graph neural networks.
Current Opinion in Structural Biology, 79:102538, 2023. [32] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. [33] Jiaxuan You, Jonathan M Gomes-Selman, Rex Ying, and Jure Leskovec. Identity-aware graph neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 35, pages 10737–10745, 2021. [34] Bohang Zhang, Guhao Feng, Yiheng Du, Di He, and Liwei Wang. A complete expressiveness hierarchy for subgraph GNNs via subgraph Weisfeiler-Lehman tests. In International Conference on Machine Learning, pages 41019–41077. PMLR, 2023. [35] Muhan Zhang and Pan Li. Nested graph neural networks. Advances in Neural Information Processing Systems, 34:15734–15747, 2021. [36] Lingxiao Zhao, Wei Jin, Leman Akoglu, and Neil Shah. From stars to subgraphs: Uplifting any GNN with local structure awareness. In International Conference on Learning Representations, 2022. [37] Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. AI Open, 1:57–81, 2020.

# Missing proofs

# A Proofs for section 2

Proposition 2.3. ([4]) For every GML-formula $\phi$ there is a GNN $\mathcal{A}$ such that $\mathsf{cls}_{\mathcal{A}} = \mathsf{cls}_{\phi}$. Moreover, the GNN in question only uses Sum as aggregation and a single ReLU-FFNN as combination function. Proof. Let $\Phi$ be the smallest set of GML-formulas containing $\phi$ that is closed under taking subformulas.
The operator depth of a GML-formula is defined as follows: $\mathrm{depth}(p) = 0$, $\mathrm{depth}(\lnot\phi) = \mathrm{depth}(\phi) + 1$, $\mathrm{depth}(\phi \wedge \psi) = \max\{\mathrm{depth}(\phi), \mathrm{depth}(\psi)\} + 1$, and $\mathrm{depth}(\diamondsuit^{\geq n}\phi) = \mathrm{depth}(\phi) + 1$. Let $|\Phi| = k$ and let $L$ be the maximal operator depth of a formula in $\Phi$. Let

$$ \mathcal{A} = ((\mathrm{COM}_i)_{i=1\dots L}, (\mathrm{AGG}_i)_{i=1\dots L}) $$

where

• $\mathrm{COM}_1 : \mathbb{R}^{2|P|} \to \mathbb{R}^{k}$ is given by $\mathrm{COM}_1(x_1, \dots, x_{|P|}, y_1, \dots, y_{|P|}) = (z_1, \dots, z_k)$ with $z_i = x_j$ if $\phi_i$ is of the form $p_j$ and $z_i = 0$ otherwise.

• For $i > 1$, $\mathrm{COM}_i : \mathbb{R}^{2k} \to \mathbb{R}^{k}$ is given by $\mathrm{COM}_i(x_1, \dots, x_k, y_1, \dots, y_k) = (z_1, \dots, z_k)$ with

$$ z_i = \begin{cases} x_i & \text{if } \phi_i \text{ is of the form } p \\ 1 - x_j & \text{if } \phi_i \text{ is of the form } \lnot\phi_j \\ \min\{x_j, x_m\} & \text{if } \phi_i \text{ is of the form } \phi_j \wedge \phi_m \\ 1 \text{ if } y_j \geq n,\ 0 \text{ otherwise} & \text{if } \phi_i \text{ is of the form } \diamondsuit^{\geq n}\phi_j \end{cases} $$

• Each $\mathrm{AGG}_i$ is (pointwise) sum.
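The Boolean operations in the case distinction above are exactly what makes $\mathrm{COM}_i$ implementable as a small ReLU network, as the proof notes next. A minimal sanity check in plain Python (function names are ours, for illustration only):

```python
def relu(x):
    return max(x, 0.0)

# Conjunction of truth values x, y in {0, 1}: ReLU(x + y - 1)
def conj(x, y):
    return relu(x + y - 1.0)

# Negation of a truth value: 1 - x
def neg(x):
    return 1.0 - x

# Graded-diamond test "1 if y >= n, 0 otherwise" for an integer count y:
# ReLU(1 - ReLU(n - y))
def at_least(y, n):
    return relu(1.0 - relu(n - y))

# Exhaustive check on Boolean inputs and small counts
assert all(conj(x, y) == float(x and y) for x in (0, 1) for y in (0, 1))
assert all(at_least(y, 2) == float(y >= 2) for y in range(6))
```

Note that the threshold gadget is exact only for integer-valued counts $y$, which suffices here since each $\mathrm{AGG}_i$ is a sum of 0/1 entries.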
It is not difficult to see that $\mathrm{COM}_i$ can be implemented by a two-layer feed-forward network using the ReLU activation function. In particular, the entry for conjunctions can be expressed as $\mathrm{ReLU}(x_j + x_m - 1)$ and “1 if $y_j \geq n$, 0 otherwise” can be expressed as $\mathrm{ReLU}(1 - \mathrm{ReLU}(n - y_j))$. It can be shown by a straightforward induction on $d$ that, for all $d \geq 0$, for all $\phi_j \in \Phi$ of operator depth $d$, and for all $i > d$, $\mathrm{emb}_G^i(v)(j)$ is 1 if $G, v \models \phi_j$ and 0 otherwise. In order to turn $\mathcal{A}$ into a classifier, finally, we extend $\mathrm{COM}_L$ with one additional linear layer of dimensionality $(k, 1)$ that takes a vector $(x_1, \ldots, x_k)$ and outputs $x_i$, where $\phi_i = \phi$. □ Proposition 2.4. Let $\mathcal{A}$ be a $(D, D')$-GNN with $D = |P|$, let $N > 0$, and let $X = \{\mathrm{run}_{\mathcal{A}}(G, \mathrm{emb}_G)(v) \mid G = (V, E, \mathrm{lab})$ is a graph of degree at most $N$ and $v \in V\}$. In other words, $X \subseteq \mathbb{R}^{D'}$ is the set of all node embeddings that $\mathcal{A}$ can produce when run on a graph of degree at most $N$. Then $X$ is a finite set, and for each $\mathbf{x} \in X$, there is a GML-formula $\phi_{\mathbf{x}}$ such that for every pointed graph $(G, v)$ of degree at most $N$, it holds that $G, v \models \phi_{\mathbf{x}}$ iff $\mathrm{run}_{\mathcal{A}}(G, \mathrm{emb}_G)(v) = \mathbf{x}$.
In particular, for each GNN-classifier $\mathsf{cls}_{\mathcal{A}}$ there is a GML-formula $\phi$ such that $\mathsf{cls}_{\phi}(G, v) = \mathsf{cls}_{\mathcal{A}}(G, v)$ for all pointed graphs $(G, v)$ of degree at most $N$. Proof. Let $\mathcal{A} = ((\mathrm{COM}_i)_{i=1\dots L}, (\mathrm{AGG}_i)_{i=1\dots L})$. By definition, $\mathrm{run}_{\mathcal{A}}(G, \mathrm{emb}_G)(v)$ is equal to $\mathrm{emb}_G^L(v)$ where, for each $i = 0 \dots L$, $\mathrm{emb}_G^i : V \to \mathbb{R}^{D_i}$ is given by

• $\mathrm{emb}_G^0 = \mathrm{emb}_G$

• $\mathrm{emb}_G^i = \{v : \mathrm{COM}_i\big(\mathrm{emb}_G^{i-1}(v) \oplus \mathrm{AGG}_i(\{\!\{\mathrm{emb}_G^{i-1}(u) \mid (v, u) \in E\}\!\})\big) \mid v \in V\}$ for $i > 0$

The main statement can therefore be restated as saying that the set $X_L = \{\mathrm{emb}_G^L(v) \mid G = (V, E, \mathrm{lab})$ is a graph of degree at most $N$ and $v \in V\}$ is finite and there is a defining GML-formula $\phi_{\mathbf{x}}$ for each $\mathbf{x} \in X_L$ (over pointed graphs of degree at most $N$). We proceed by induction on $L$. For $L = 0$, $\mathrm{emb}_G^L$ equals $\mathrm{emb}_G$.
In this case, $X_L$ is equal to the set of all multi-hot encodings, i.e., $X_L = \{0, 1\}^D$, and for every $\mathbf{x} = (x_1, \ldots, x_D) \in X_L$, we can simply choose $\phi_{\mathbf{x}} = \alpha_1 \wedge \cdots \wedge \alpha_D$ where $\alpha_i = p_i$ if $x_i = 1$ and $\alpha_i = \neg p_i$ if $x_i = 0$. Next, let $L > 0$. By induction hypothesis, the claim holds for $X_{L-1}$. Let us consider pairs $(\mathbf{z}, Z)$ where $\mathbf{z} \in X_{L-1}$ and $Z$ is a multiset of vectors belonging to $X_{L-1}$, such that $Z$ has cardinality at most $N$. Note that there are at most finitely many such pairs. For any such pair let $\phi_{(\mathbf{z}, Z)}$ be the GML-formula

$$ \phi_{\mathbf{z}} \wedge \bigwedge_{\substack{\mathbf{u} \text{ occurs in } Z \\ \text{with cardinality } n}} \big(\diamondsuit^{\geq n}\phi_{\mathbf{u}} \wedge \neg\diamondsuit^{\geq n+1}\phi_{\mathbf{u}}\big) $$

We now define our formula $\phi_{\mathbf{x}}$ as the disjunction of $\phi_{(\mathbf{z}, Z)}$ for all pairs $(\mathbf{z}, Z)$ for which it holds that $\mathrm{COM}_L(\mathbf{z} \oplus \mathrm{AGG}_L(Z)) = \mathbf{x}$. It follows from the construction that $(G, v) \models \phi_{\mathbf{x}}$ if and only if $\mathrm{emb}_G^L(v) = \mathbf{x}$.
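For instance, if $Z$ contains a vector $\mathbf{y}$ with cardinality 2 and a vector $\mathbf{w}$ with cardinality 1 (and nothing else), the construction yields

$$ \phi_{(\mathbf{z}, Z)} = \phi_{\mathbf{z}} \wedge \diamondsuit^{\geq 2}\phi_{\mathbf{y}} \wedge \neg\diamondsuit^{\geq 3}\phi_{\mathbf{y}} \wedge \diamondsuit^{\geq 1}\phi_{\mathbf{w}} \wedge \neg\diamondsuit^{\geq 2}\phi_{\mathbf{w}}, $$

which says that the current node satisfies $\phi_{\mathbf{z}}$, has exactly two successors satisfying $\phi_{\mathbf{y}}$, and has exactly one successor satisfying $\phi_{\mathbf{w}}$.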
□

# B Proofs of logical characterizations in sections 3 and 4

We prove several lemmas building up to a uniform translation from HES-GNN-$(d, r)$ to $\mathrm{GML}(\downarrow_{W^r}^d)$ (theorem B.5) and a converse uniform translation for graphs with bounded degree (theorem B.6). The logical characterizations in theorems 3.3 and 4.1 follow with small additions or adjustments to the proofs. We will use $\diamondsuit_k$ as shorthand for $k$ nested diamonds $\underbrace{\diamondsuit\cdots\diamondsuit}_{k}$. Proposition B.1. Every $\mathrm{GML}(\downarrow, @)$-sentence is equivalent to a $\mathrm{GML}(\downarrow)$-sentence of the same $\downarrow$-nesting-depth over unordered graphs. Proof. Let $\phi$ be a $\mathrm{GML}(\downarrow, @)$-sentence. Let $n$ be the modal depth of $\phi$, that is, the maximal nesting depth of the modal operators in the sentence, and let $\phi'$ be the sentence obtained from $\phi$ by replacing every subformula of the form $@_x\psi$ by

$$ \bigvee_{i=0\dots 2n} (\underbrace{\diamondsuit\cdots\diamondsuit}_{\text{length } i}(x \wedge \psi)) $$

Clearly, $\phi'$ has the same $\downarrow$-nesting-depth as $\phi$. A straightforward induction proof shows that $\phi$ and $\phi'$ are equivalent on unordered graphs. Intuitively, this is because all variables in $\phi$ are bound, and $2n$ bounds the distance between nodes in the subgraph that the formula $\phi$ can “see”. □ Proposition B.2 (Canonical Form). Let $Nd_\phi^W(\psi)$ be the number of $W$ operators in $\phi$ that have $\psi$ in their scope.
For each $\mathrm{GML}(\downarrow_{W^r}^d)$ sentence $\phi$ there exists an equivalent $\mathrm{GML}(\downarrow_{W^r}^d)$ sentence $\psi$ such that for every subformula of the form $\downarrow x_i.W^r\psi'$, $i = d + 1 - Nd_\psi^W(\psi')$. We call such $\psi$ canonical. Proof. We assume the proposition holds for $d$ and apply induction over formula construction in $\mathrm{GML}(\downarrow_{W^r}^{d+1})$. Let $\phi_1 \in \mathrm{GML}(\downarrow_{W^r}^d)$, and $\phi = \downarrow x_i.W^r\phi_1$. Let $\psi_1$ be the canonical form of $\phi_1$ and construct $\psi_1^*$ by substituting free occurrences of $x_i$ in $\psi_1$ by $x_{d+1}$. Then:

$$ \psi = \downarrow x_{d+1}.W^r\psi_1^* $$

Clearly $\psi \equiv \phi$, and for every subformula $\downarrow x_j.W^r\psi'$ in $\psi$ we have $j = d + 2 - Nd_\psi^W(\psi')$. Canonical form is maintained under construction with the other connectives $(\neg, \wedge, \diamondsuit^{\geq k})$. □ Lemma B.3. Given a finite tuple $(\mathcal{A}^1, \ldots, \mathcal{A}^m)$ where for $1 \leq j \leq m$, $\mathcal{A}^j$ is a $(D, D')$-GNN, there exists a $(D, m \cdot D')$-GNN $\mathcal{A}$ such that $\mathrm{run}_{\mathcal{A}}(G)(u) = \bigoplus_{1 \leq j \leq m}(\mathrm{run}_{\mathcal{A}^j}(G)(u))$.
Moreover, if all $\mathcal{A}^j$ use the same pointwise aggregation function, then $\mathcal{A}$ uses the same aggregation function, and if all $\mathcal{A}^j$ use ReLU-FFNNs as combination functions, the same holds for $\mathcal{A}$. Proof. For each $j$ let $\mathcal{A}^j = ((\mathrm{COM}_i^j)_{i=1,\dots,L_j}, (\mathrm{AGG}_i^j)_{i=1,\dots,L_j})$. Without loss of generality all $\mathcal{A}^j$ have the same number of layers $L'$. Let $L = L' + 1$. We construct $\mathcal{A} = ((\mathrm{COM}_i)_{i=1,\dots,L}, (\mathrm{AGG}_i)_{i=1,\dots,L})$. The first layer copies each embedding $m$ times, i.e. for $v, v' \in \mathbb{R}^D$: $\mathrm{COM}_1(v \oplus v') = \bigoplus_{1 \leq j \leq m}(v)$. Given combination functions $\mathrm{COM}^j$, let $\mathrm{COM}$ apply each $\mathrm{COM}^j$ on the associated subspace:

$$ \mathrm{COM}\big((v_1 \oplus \cdots \oplus v_m) \oplus (v_1' \oplus \cdots \oplus v_m')\big) = \bigoplus_{1 \leq j \leq m}\big(\mathrm{COM}^j(v_j \oplus v_j')\big) $$

Similarly, given aggregation functions $\mathrm{AGG}^j$, let $\mathrm{AGG}$ behave as:

$$ \mathrm{AGG}(M) = \bigoplus_{1 \leq j \leq m}\big(\mathrm{AGG}^j(M_j)\big) $$

Here, for $\mathrm{AGG}^j : \mathcal{M}(\mathbb{R}^{D_j}) \to \mathbb{R}^{D_j}$, $M$ is a multiset of embeddings in $\mathbb{R}^{\sum_{1 \leq j \leq m} D_j}$ and $M_j$ is the multiset of embeddings in $\mathbb{R}^{D_j}$ obtained by restricting each embedding in $M$ to the indices $[\sum_{j' < j} D_{j'}, \sum_{j' \leq j} D_{j'}]$. Note that if each $\mathrm{AGG}^j$ is the same pointwise aggregation function (e.g. sum), then $\mathrm{AGG}$ is the same pointwise aggregation function on the concatenated space. We define the remaining layers of $\mathcal{A}$ by taking, for every $l > 1$, $\mathrm{COM}_l$ and $\mathrm{AGG}_l$ to be the combined functions constructed as above from $(\mathrm{COM}_{l-1}^j)_{1 \leq j \leq m}$ and $(\mathrm{AGG}_{l-1}^j)_{1 \leq j \leq m}$, respectively. □ Corollary B.4.
Given a finite tuple $(\mathcal{A}^1, \ldots, \mathcal{A}^m)$ where for $1 \leq j \leq m$, $\mathcal{A}^j$ is a $(D, D')$-HES-GNN with nesting depth $d$ and radius $r$, there exists a $(D, m \cdot D')$-HES-GNN $\mathcal{A}$ with nesting depth $d$ and radius $r$ such that $\mathrm{run}_{\mathcal{A}}(G)(u) = \bigoplus_{1 \leq j \leq m}(\mathrm{run}_{\mathcal{A}^j}(G)(u))$. Moreover, if all $\mathcal{A}^j$ use the same pointwise aggregation function, then $\mathcal{A}$ uses the same aggregation function, and if all $\mathcal{A}^j$ use ReLU-FFNNs as combination functions, the same holds for $\mathcal{A}$. Proof. For $d = 0$ the claim follows from the above lemma. For $d > 0$ and $\mathcal{A}^j = (\mathcal{B}^j, \mathcal{C}^j)$, construct $\mathcal{A} = (\mathcal{B}, \mathcal{C})$ where $\mathrm{run}_{\mathcal{B}}(G)(u) = \bigoplus_{1 \leq j \leq m}(\mathrm{run}_{\mathcal{B}^j}(G)(u))$. $\mathcal{C}$ follows the construction of the above lemma, with the exception that the first combination function does not copy the complete vertex embedding $m$ times. Instead, $\mathrm{COM}_1$ now receives a vertex embedding $v \oplus \bigoplus_{1 \leq j \leq m}(v_j)$, where $v$ is the original vertex embedding and $v_j$ is the output of $\mathcal{B}^j$, and produces output $\bigoplus_{1 \leq j \leq m}(v \oplus v_j)$. □ Theorem B.5.
Given a finite tuple $(\phi_1, \ldots, \phi_m)$ where for $1 \leq j \leq m$, $\phi_j$ is a $\mathrm{GML}(\downarrow_{W^r}^d)$ sentence with propositions in $P$, there exists a $(|P|, m)$-HES-GNN $\mathcal{A}$ with nesting depth $d$ and radius $r$ such that for all $G$ labeled with $P$, $\mathrm{run}_{\mathcal{A}}(G)(u) = \bigoplus_{1 \leq j \leq m}(\mathsf{cls}_{\phi_j}(G, u))$. Proof. For $d = 0$, following the uniform translation from GML to GNN of proposition 2.3, there exists a GNN $\mathcal{A}_j$ such that $\mathrm{run}_{\mathcal{A}_j} = \mathsf{cls}_{\phi_j}$. By lemma B.3 there then exists a GNN $\mathcal{A}$ that produces the concatenation of outputs for each $\mathcal{A}_j$. We apply induction over $d$. It suffices to show that any single sentence $\phi$ in $\mathrm{GML}(\downarrow_{W^r}^d)$ is implemented by a single HES-GNN $\mathcal{A}$, since we can construct a HES-GNN with concatenated outputs following corollary B.4. By proposition B.2 we can assume $\phi$ is canonical, hence for every maximal subformula $\downarrow x_i.W^r(\psi)$ in $\phi$, $i = d$. Let $\downarrow x_d.W^r\psi_1, \ldots, \downarrow x_d.W^r\psi_k$ be all such maximal subformulas. By the semantics of $\mathrm{GML}(\downarrow_{W^r}^d)$, for all $1 \leq j \leq k$:

$$ G, v \models\ \downarrow x_d.W^r(\psi_j) \quad\text{iff}\quad G_v^r[x_d \mapsto v], v \models \psi_j $$

Now when $x_d$ is treated as a binary node feature, $\psi_j$ is a sentence in $\mathrm{GML}(\downarrow_{W^r}^{d-1})$.
There thus exists a HES-GNN $\mathcal{B}_j$ such that

$$ \mathrm{run}_{\mathcal{B}_j}(G_v^r[x_d \mapsto v])(v) = \mathsf{cls}_{\psi_j}(G_v^r[x_d \mapsto v], v) = \mathsf{cls}_{\downarrow x_d.W^r(\psi_j)}(G, v) $$

We construct a single HES-GNN $\mathcal{B}$ with nesting depth $d - 1$ that outputs the concatenated outputs of all such $\mathcal{B}_j$, following corollary B.4:

$$ \mathrm{run}_{\mathcal{B}}(G_v^r[x_d \mapsto v])(v) = \bigoplus_{1 \leq j \leq k} \mathsf{cls}_{\downarrow x_d.W^r(\psi_j)}(G, v) $$

Now construct $\phi^*$ from $\phi$ by substituting each $\downarrow x_d.W^r\psi_j$ with a proposition $q_j$. Given $G = (V, E, \mathrm{lab})$, let $G^* = (V, E, \mathrm{lab}^*)$, where $\mathrm{lab}^*$ extends $\mathrm{lab}$ so that $q_j \in \mathrm{lab}^*(u)$ iff $G, u \models\ \downarrow x_d.W^r\psi_j$. Then:

$$ \mathsf{cls}_{\phi}(G, v) = \mathsf{cls}_{\phi^*}(G^*, v) $$

Note that $\phi^* \in \mathrm{GML}$. Hence by proposition 2.3 there exists a GNN $\mathcal{C}$ such that for $\mathcal{A} = (\mathcal{B}, \mathcal{C})$:

$$ \mathsf{cls}_{\phi^*}(G^*, v) = \mathrm{run}_{\mathcal{C}}(G^*)(v) = \mathrm{run}_{\mathcal{A}}(G)(v) $$

□ Theorem B.6. Let $\mathcal{A}$ be a $(D, D')$-HES-GNN with $D = |P|$, nesting depth $d$, and radius $r$.
Let $N \geq 0$ and $X = \{\mathrm{run}_{\mathcal{A}}(G, \mathrm{emb}_G)(v) \mid G = (V, E, \mathrm{lab})$ is a graph of degree at most $N$ and $v \in V\}$. Then $X$ is a finite set, and for each $\mathbf{x} \in X$ there is a $\mathrm{GML}(\downarrow_{W^r}^d)$ sentence $\phi_{\mathbf{x}}$ such that for every pointed graph $(G, v)$ of degree at most $N$ it holds that $G, v \models \phi_{\mathbf{x}}$ iff $\mathrm{run}_{\mathcal{A}}(G)(v) = \mathbf{x}$. In particular, for each HES-GNN-$(d, r)$ classifier $\mathsf{cls}_{\mathcal{A}}$ there is a $\mathrm{GML}(\downarrow_{W^r}^d)$ sentence $\phi$ such that $\mathsf{cls}_{\phi}(G, v) = \mathsf{cls}_{\mathcal{A}}(G, v)$ for all pointed graphs $(G, v)$ of degree at most $N$. Proof. We apply induction over $d$. The case $d = 0$ reduces to the translation from GNN to GML of proposition 2.4. Let $\mathcal{A} = (\mathcal{B}, \mathcal{C})$ be a HES-GNN with depth $d$ and radius $r$. $X$ is finite since $\mathcal{B}$ has finitely many output embeddings by the induction hypothesis, and $\mathcal{C}$ produces finitely many output features on graphs with finitely many input embeddings as shown in the proof of proposition 2.4.
Let $\mathrm{emb}^u = \{u' : \mathrm{emb}(u') \oplus \delta_{uu'} \mid u' \in V\}$; then for each $\mathbf{x} \in X$:

$$ \begin{array}{rl} \mathrm{run}_{\mathcal{A}}(G)(v) = \mathbf{x} & \text{iff} \\ \mathrm{run}_{\mathcal{C}}\big(G, \{u : \mathrm{emb}_G(u) \oplus \mathrm{run}_{\mathcal{B}}(G_u^r, \mathrm{emb}^u)(u) \mid u \in V\}\big)(v) = \mathbf{x} & \text{iff} \\ \mathrm{run}_{\mathcal{C}}(G^*)(v) = \mathbf{x} & \end{array} $$

where, given $G = (V, E, \mathrm{lab})$, $G^* = (V, E, \mathrm{lab}^*)$ and $\mathrm{lab}^*$ extends $\mathrm{lab}$ with propositions for all output features of $\mathcal{B}$. Specifically, let $Y_{\mathcal{B}} = \{\mathbf{y}_1, \ldots, \mathbf{y}_{|Y_{\mathcal{B}}|}\}$ be this finite set of output features and introduce $q_1, \ldots, q_{|Y_{\mathcal{B}}|}$ such that $q_j \in \mathrm{lab}^*(u)$ if and only if $\mathrm{run}_{\mathcal{B}}(G_u^r, \mathrm{emb}^u)(u) = \mathbf{y}_j$. By proposition 2.4 there exists a sentence $\xi$ in GML such that:

$$ G^*, v \models \xi \quad\text{iff}\quad \mathrm{run}_{\mathcal{C}}(G^*)(v) = \mathbf{x} $$

We then define

$$ \phi_{\mathbf{x}} = \xi\big[\downarrow x_d.W^r\phi_{\mathbf{y}_1}, \ldots, \downarrow x_d.W^r\phi_{\mathbf{y}_{|Y_{\mathcal{B}}|}} \,/\, q_1, \ldots, q_{|Y_{\mathcal{B}}|}\big]. $$

Here all $\downarrow x_d.W^r(\phi_{\mathbf{y}_i})$ are sentences in $\mathrm{GML}(\downarrow_{W^r}^d)$ by the induction hypothesis, so that the same holds for $\phi_{\mathbf{x}}$. □ Small adaptations can be made to the proofs of theorems B.5 and B.6 to show $\rho(\text{HE-GNN-}d) = \rho(\mathrm{GML}(\downarrow^d))$, where now all operators $W^r$ are removed and no subgraph restrictions are applied. Since $\text{HE-GNN} = \cup_{d \geq 0}\,\text{HE-GNN-}d$ and $\mathrm{GML}(\downarrow) = \cup_{d \geq 0}\,\mathrm{GML}(\downarrow^d)$, it follows that $\rho(\text{HE-GNN}) = \rho(\mathrm{GML}(\downarrow))$. That is: Theorem 3.3. $\rho(\text{HE-GNN}) = \rho(\mathrm{GML}(\downarrow))$. Moreover, for $d \geq 0$, $\rho(\text{HE-GNN-}d) = \rho(\mathrm{GML}(\downarrow^d))$. Lemma B.7. Let $r \geq 0$; then $\rho(\mathrm{GML}(\downarrow^d)) \subseteq \rho(\mathrm{GML}(\downarrow_{W^r}^d))$. Proof. Let $\phi \in \mathrm{GML}(\downarrow_{W^r}^d)$. We define a sentence $\xi$, which only holds at vertices of distance $\leq r$ to $x_i$:

$$ \xi = x_i \vee \bigvee_{1 \leq j \leq r} \diamondsuit_j x_i $$

Now take a minimal subformula of the form $\downarrow x_i.W^r\psi$ in $\phi$. We substitute this for an equivalent subformula $\downarrow x_i.\psi'$, where each $\diamondsuit^{\geq k}\tau$ in $\psi$ is replaced by $\diamondsuit^{\geq k}(\xi \wedge \tau)$ to produce $\psi'$. Applying this transformation recursively to subformulas in $\phi$ yields an equivalent $\mathrm{GML}(\downarrow^d)$ sentence. □ Theorem 4.1.
1. $\rho(\text{HES-GNN}) = \rho(\mathrm{GML}(\downarrow_W)) = \rho(\mathrm{GML}(\downarrow)) = \rho(\text{HE-GNN})$.

2. $\rho(\text{HES-GNN-}r) = \rho(\mathrm{GML}(\downarrow_{W^r}))$. Moreover, $\rho(\text{HES-GNN-}(d, r)) = \rho(\mathrm{GML}(\downarrow_{W^r}^d))$.

Proof. Theorems B.5 and B.6 yield $\rho(\text{HES-GNN-}(d, r)) = \rho(\mathrm{GML}(\downarrow_{W^r}^d))$ for all $d, r \geq 0$. Since $\text{HES-GNN-}r = \cup_{d \geq 0}\,\text{HES-GNN-}(d, r)$ and $\mathrm{GML}(\downarrow_{W^r}) = \cup_{d \geq 0}\,\mathrm{GML}(\downarrow_{W^r}^d)$, point (2) follows. Similarly, taking the union over all $r \geq 0$ yields $\rho(\text{HES-GNN}) = \rho(\mathrm{GML}(\downarrow_W))$. Finally, to see that $\rho(\mathrm{GML}(\downarrow_W)) = \rho(\mathrm{GML}(\downarrow))$, note that when two pointed graphs are separated by a sentence in $\mathrm{GML}(\downarrow)$ they are also separated by a $\mathrm{GML}(\downarrow_{W^r})$ sentence, where $r$ is the size of the larger of the two graphs. Using lemma B.7 then $\rho(\mathrm{GML}(\downarrow_W)) = \rho(\mathrm{GML}(\downarrow))$. □

# C Proofs of hierarchy results in section 4

Theorem 4.2. For $d \geq 1$, $r \geq 0$, $\rho(\text{HES-GNN-}(d, r+1)) \subsetneq \rho(\text{HES-GNN-}(d, r))$. Proof. The construction of lemma B.7 also shows $\rho(\mathrm{GML}(\downarrow_{W^{r+1}}^d)) \subseteq \rho(\mathrm{GML}(\downarrow_{W^r}^d))$ for $d \geq 1$, $r \geq 0$.
Using the logical characterization of theorem 4.1, then $\rho(\text{HES-GNN-}(d, r+1)) \subseteq \rho(\text{HES-GNN-}(d, r))$. Now to show $\rho(\text{HES-GNN-}(d, r+1)) \neq \rho(\text{HES-GNN-}(d, r))$, consider graphs $G_1 = C_{4r+6}$ and $G_2$ consisting of two disjoint cycles $C_{2r+3}$, where all vertices are labeled with the empty set. Let $v_1, v_2$ be nodes in $G_1, G_2$ and $\phi = \downarrow x_i.W^{r+1}(\diamondsuit_{r+1}(\neg\diamondsuit^{\geq 2}\top))$. Then:

$$ G_1, v_1 \models \phi, \qquad G_2, v_2 \not\models \phi $$

Again applying theorem 4.1, $(G_1, v_1)$ and $(G_2, v_2)$ can be separated by HES-GNN-$(d, r+1)$ for $d \geq 1$. However, the marked induced subgraphs with radius $r$ around $v_1, v_2$ are isomorphic and hence cannot be separated by any HE-GNN. Since $(G_1, v_1), (G_2, v_2)$ are furthermore indistinguishable by WL:

$$ \forall d \geq 0: \big((G_1, v_1), (G_2, v_2)\big) \in \rho(\text{HES-GNN-}(d, r)) $$

□ Theorem 4.3. For $d \geq 0$ and $r \geq 3$, $\rho(\text{HES-GNN-}(d+1, r)) \subsetneq \rho(\text{HES-GNN-}(d, r))$. Proof. This follows from

$$ \begin{array}{l} \rho((d+2)\text{-GNN}) \subseteq \rho(\text{HE-GNN-}d) \subseteq \rho(\text{HES-GNN-}(d, r)) \\ \rho((d+2)\text{-GNN}) \not\subseteq \rho(\text{HES-GNN-}(d+1, r)) \end{array} $$

using theorem 5.10. □

# D Proofs of relations with other models in section 5

# D.1 Individualization-Refinement

Proposition D.1 ([26, 32]).
Let $n, D \in \mathbb{N}$. Then there exists a GNN $\mathcal{A}^s$ such that for all featured graphs $(G, \mathrm{emb})$, $(G', \mathrm{emb}')$ of size $\leq n$ and with embeddings in $\mathbb{R}^D$, if $\mathrm{WL}(G, n)(v) \neq \mathrm{WL}(G', n)(v')$ then $\mathrm{run}_{\mathcal{A}^s}(G)(v) \neq \mathrm{run}_{\mathcal{A}^s}(G')(v')$. We call such a GNN sufficiently separating. Lemma D.2. Let $n, d, D \in \mathbb{N}$. Then there exists a sufficiently separating HE-GNN $\mathcal{A}^s$ with nesting depth $d$ such that for all featured graphs $(G, \mathrm{emb})$, $(G', \mathrm{emb}')$ of size $\leq n$ and with embeddings in $\mathbb{R}^D$, if $((G, v), (G', v')) \notin \rho(\text{HE-GNN-}d)$ then $\mathrm{run}_{\mathcal{A}^s}(G)(v) \neq \mathrm{run}_{\mathcal{A}^s}(G')(v')$. Proof. For $d = 0$ we use the GNN from the proposition above. Suppose the lemma holds for $d$, call this sufficiently separating HE-GNN $\mathcal{B}^s$, and let $D'$ be its output dimension. We then let $\mathcal{A}^s = (\mathcal{B}^s, \mathcal{C}^s)$, where $\mathcal{C}^s$ is the GNN of the proposition above for input dimension $D + D'$. Suppose a HE-GNN $\mathcal{A}' = (\mathcal{B}', \mathcal{C}')$ with nesting depth $d + 1$ separates $(G, v)$ from $(G', v')$.
Then since the output of $\mathcal{B}^s$ separates all pointed graphs that are separated by any HE-GNN with nesting depth $d$, and by the proposition above, $\mathcal{A}^s = (\mathcal{B}^s, \mathcal{C}^s)$ also separates $(G, v)$ from $(G', v')$. □

Notation. Assume an injective map $f$ from the infinite set of colors $\mathcal{C}$ to an infinite set of binary node labels (propositions) $\mathcal{P}$. Given a graph $G = (V, E, \mathrm{lab})$ we write $G_{\mathrm{WL}}$ for $G = (V, E, \mathrm{lab}')$, where $\mathrm{lab}'$ is obtained by first computing the coloring $\mathrm{col} = \mathrm{WL}(G, |G|)$ and then labeling nodes $v \in V$ with $f(\mathrm{col}(v))$.

Lemma D.3. Let $(G, v), (G', v')$ be pointed graphs such that $|G| = |G'|$. If $((G, v), (G', v')) \in \rho(\text{HE-GNN-}d)$ then $((G_{\mathrm{WL}}, v), (G'_{\mathrm{WL}}, v')) \in \rho(\text{HE-GNN-}d)$.

Proof. We apply induction over $d$. For $d = 0$ note that $\rho(\text{HE-GNN-}0) = \rho(\mathrm{GNN}) = \rho(\mathrm{WL})$. Let $(G, v), (G', v')$ be of size $n$, with labels in $P$, such that $((G, v), (G', v')) \in \rho(\text{HE-GNN-}d)$ for $d > 0$. By Lemma D.2 there exists a sufficiently separating HE-GNN $\mathcal{A}^s = (\mathcal{B}^s, \mathcal{C}^s)$.
We show $(G_{\mathrm{WL}}, v), (G'_{\mathrm{WL}}, v') \in \rho(\mathsf{cls}_{\mathcal{A}^s})$, since the input embeddings used by $\mathcal{C}^s$ for $G_{\mathrm{WL}}, G'_{\mathrm{WL}}$ are not more separating than those for $G, G'$. Recall that a node $u$ in $G$ has embedding:
$$
\mathrm{emb}_G(u) \oplus \mathrm{run}_{\mathcal{B}^s}(G, \{w : \mathrm{emb}(w) \oplus \delta_{uw} \mid w \in V\})(u)
$$
If $u$ in $G$ and $u'$ in $G'$ have the same embedding, then by the induction hypothesis:
$$
\mathrm{run}_{\mathcal{B}^s}\big(G_{\mathrm{WL}}, \{w : \mathrm{emb}(w) \oplus \delta_{uw} \mid w \in V\}\big)(u) = \mathrm{run}_{\mathcal{B}^s}\big(G'_{\mathrm{WL}}, \{w : \mathrm{emb}(w) \oplus \delta_{u'w} \mid w \in V'\}\big)(u')
$$
Since this also implies $\mathrm{emb}_{G_{\mathrm{WL}}}(u) = \mathrm{emb}_{G'_{\mathrm{WL}}}(u')$, the nodes $u$ and $u'$ have the same embedding for input $G_{\mathrm{WL}}, G'_{\mathrm{WL}}$. Since $\mathcal{C}^s$ is sufficiently separating and $(G, v), (G', v')$ are not separated by $\mathcal{A}^s$, the same holds for $(G_{\mathrm{WL}}, v), (G'_{\mathrm{WL}}, v')$. Thus $(G_{\mathrm{WL}}, v), (G'_{\mathrm{WL}}, v') \in \rho(\text{HE-GNN-}d)$. □

We apply the following lemma from Zhang et al. [34]. Lemma D.4.
Let $G = (V, E, \mathrm{lab})$, $G' = (V', E', \mathrm{lab}')$ be finite connected graphs with $v \in V, v' \in V'$ uniquely marked. Then:
$$
\mathrm{WL}(G)(v) = \mathrm{WL}(G')(v') \;\text{ iff }\; \{\mathrm{WL}(G)(u) \mid u \in V\} = \{\mathrm{WL}(G')(u) \mid u \in V'\}
$$
Theorem 5.3. For $d \geq 0$, if $G \not\equiv_{\text{WL-IR-}d} G'$, then $G \not\equiv_{\text{HE-GNN-}d} G'$.

Proof. We apply induction over $d$. At $d = 0$, the claim follows since $\rho(\mathrm{GNN}) = \rho(\mathrm{WL})$. Suppose for graphs $G, G'$ with labels in $P$ and $d > 0$ that $G \equiv_{\text{HE-GNN-}d} G'$; then $|G| = |G'|$. We show $G \equiv_{\text{WL-IR-}d} G'$. By Lemma D.2 there exists a sufficiently separating HE-GNN $\mathcal{A}^s = (\mathcal{B}^s, \mathcal{C}^s)$. Since:
$$
\{\mathrm{run}_{\mathcal{A}^s}(G, v) \mid v \in V\} = \{\mathrm{run}_{\mathcal{A}^s}(G', v) \mid v \in V'\}
$$
there is a bijection $f$ between the two multisets such that for all $u \in V$ and the unique proposition $q_d$:
$$
\mathrm{run}_{\mathcal{B}^s}(G_{[q_d \mapsto u]})(u) = \mathrm{run}_{\mathcal{B}^s}(G'_{[q_d \mapsto f(u)]})(f(u))
$$
Let $C^u, C^{f(u)}$ be the connected components containing $u$ and $f(u)$.
By Lemma D.4:
$$
\{\mathrm{run}_{\mathcal{B}^s}\big((C^u_{[q_d \mapsto u]})_{\mathrm{WL}}\big)(w) \mid w \in C^u\} = \{\mathrm{run}_{\mathcal{B}^s}\big((C^{f(u)}_{[q_d \mapsto f(u)]})_{\mathrm{WL}}\big)(w) \mid w \in C^{f(u)}\}
$$
Since these connected components then also obtain the same multiset of output embeddings without a unique marking, this equality extends to the full graphs $G, G'$:
$$
\{\mathrm{run}_{\mathcal{B}^s}(G_{[q_d \mapsto u]})(w) \mid w \in V\} = \{\mathrm{run}_{\mathcal{B}^s}(G'_{[q_d \mapsto f(u)]})(w) \mid w \in V'\}
$$
It follows that $G, G'$ are indistinguishable by WL-IR-$(d-1)$ after marking $u, f(u)$:
$$
\begin{array}{rl}
\{\mathrm{run}_{\mathcal{B}^s}\big((G_{[q_d \mapsto u]})_{\mathrm{WL}}\big)(w) \mid w \in V\} = \{\mathrm{run}_{\mathcal{B}^s}\big((G'_{[q_d \mapsto f(u)]})_{\mathrm{WL}}\big)(w) \mid w \in V'\} & \text{(Lemma D.3)} \\
(G_{[q_d \mapsto u]})_{\mathrm{WL}} \equiv_{\text{HE-GNN-}(d-1)} (G'_{[q_d \mapsto f(u)]})_{\mathrm{WL}} & \text{(Lemma D.2)} \\
(G_{[q_d \mapsto u]})_{\mathrm{WL}} \equiv_{\text{WL-IR-}(d-1)} (G'_{[q_d \mapsto f(u)]})_{\mathrm{WL}} & \text{(Induction hypothesis)}
\end{array}
$$
Since WL-IR$(G, d)$ obtains the same depth $d-1$ subtrees as WL-IR$(G', d)$, we obtain $G \equiv_{\text{WL-IR-}d} G'$. □

Theorem 5.4. Let $(G, v), (G', v')$ be connected pointed graphs, $d \geq 0$. If $G \not\equiv_{\text{WL-IR-}d} G'$, there exists a depth $d+1$ HE-GNN $\mathcal{A}$ such that $\mathsf{cls}_{\mathcal{A}}(G, v) \neq \mathsf{cls}_{\mathcal{A}}(G', v')$.

Proof. Let $d \geq 0$. Suppose $(G, v) \equiv_{\text{HE-GNN-}(d+1)} (G', v')$.
We show $G \equiv_{\text{WL-IR-}d} G'$. For a sufficiently separating HE-GNN $\mathcal{A}^s = (\mathcal{B}^s, \mathcal{C}^s)$ with nesting depth $d + 1$:
$$
\mathrm{run}_{\mathcal{A}^s}(G)(v) = \mathrm{run}_{\mathcal{A}^s}(G')(v')
$$
Thus, for the unique label $q_{d+1}$:
$$
\mathrm{run}_{\mathcal{B}^s}(G_{[q_{d+1} \mapsto v]})(v) = \mathrm{run}_{\mathcal{B}^s}(G'_{[q_{d+1} \mapsto v']})(v')
$$
Then since $G, G'$ are connected:
$$
\begin{array}{rl}
& \{\mathrm{run}_{\mathcal{B}^s}(G_{[q_{d+1} \mapsto v]})(u) \mid u \in V\} = \{\mathrm{run}_{\mathcal{B}^s}(G'_{[q_{d+1} \mapsto v']})(u') \mid u' \in V'\} \\
& G \equiv_{\text{HE-GNN-}d} G' \\
& G \equiv_{\text{WL-IR-}d} G'
\end{array}
$$

# 5.2. Homomorphism count enriched GNNs

The definitions of homomorphisms and homomorphism counts were omitted from the paper due to lack of space. They are as follows: a homomorphism from a pointed graph $(F, u)$ to a pointed graph $(G, v)$ is a map $h$ from the vertex set of $F$ to the vertex set of $G$ such that 1. for each edge $(w, w')$ of $F$, $(h(w), h(w'))$ is an edge of $G$, 2. for each vertex $w$ of $F$, $\mathrm{lab}_F(w) \subseteq \mathrm{lab}_G(h(w))$, and 3. $h(u) = v$. Homomorphisms are defined similarly for unpointed graphs $G$, where we simply omit condition (3).
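To make the three conditions concrete, here is a brute-force counter that enumerates all vertex maps and keeps those satisfying them (exponential in $|V_F|$, for illustration only; the dict-based graph encoding and the function name are ours, not the paper's):

```python
from itertools import product

def hom_count(F_edges, F_lab, G_edges, G_lab, root_pair=None):
    """Count homomorphisms from F to G: edges map to edges, labels are
    preserved (lab_F(w) subset of lab_G(h(w))), and, for the pointed case,
    the root maps to the designated target (root_pair = (u, v))."""
    F_nodes = sorted({x for e in F_edges for x in e} | set(F_lab))
    G_nodes = sorted({x for e in G_edges for x in e} | set(G_lab))
    G_edge_set = {(a, b) for a, b in G_edges} | {(b, a) for a, b in G_edges}
    count = 0
    for image in product(G_nodes, repeat=len(F_nodes)):
        h = dict(zip(F_nodes, image))
        if root_pair is not None and h[root_pair[0]] != root_pair[1]:
            continue  # condition (3): h(u) = v
        if any((h[a], h[b]) not in G_edge_set for a, b in F_edges):
            continue  # condition (1): edges map to edges
        if any(not F_lab.get(w, set()) <= G_lab.get(h[w], set()) for w in F_nodes):
            continue  # condition (2): labels preserved
        count += 1
    return count

# Rooted single edge F = ({u, u'}, {(u, u')}) into a triangle: with the root
# fixed on a vertex of K3, u' can map to either neighbour.
F_edges, F_lab = [(0, 1)], {0: set(), 1: set()}
K3_edges, K3_lab = [(0, 1), (1, 2), (0, 2)], {i: set() for i in range(3)}
print(hom_count(F_edges, F_lab, K3_edges, K3_lab, root_pair=(0, 0)))  # 2
```

Dropping `root_pair` gives the unpointed count (here 6, since the edge can map onto any of the three edges of the triangle in either orientation).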
We use $\hom((F, u), (G, v))$ to denote the number of homomorphisms from $(F, u)$ to $(G, v)$. In addition, if $h$ is a partial map from the vertex set of $F$ to the vertex set of $G$, then we denote by $\hom_h(F, G)$ the number of homomorphisms that extend $h$. In particular, $\hom_{\{(u, v)\}}(F, G) = \hom((F, u), (G, v))$. For a set of pointed graphs $\mathfrak{F} = \{(F_1, u_1), \dots, (F_m, u_m)\}$ and a pointed graph $(G, v)$, we denote by $\hom(\mathfrak{F}, (G, v))$ the vector of homomorphism counts
$$
\langle \hom((F_1, u_1), (G, v)), \dots, \hom((F_m, u_m), (G, v)) \rangle
$$
(assuming some ordering on the members of $\mathfrak{F}$).

Theorem 5.5. Let $\mathfrak{F}$ be any finite set of rooted graphs, each with at most $d$ nodes. Then there is a $(|P|, |P| + |\mathfrak{F}|)$-HE-GNN $\mathcal{A}$ of nesting depth $d$ such that, for all pointed graphs $(G, v)$,
$$
\mathrm{run}_{\mathcal{A}}(G)(v) = \mathrm{emb}_G(v) \oplus \hom(\mathfrak{F}, (G, v))
$$
The HE-GNN in question only uses Sum as aggregation and ReLU-FFNNs as combination functions.

Proof. We consider the case where $\mathfrak{F}$ consists of a single rooted graph $(F, u)$ with $d$ nodes. The proof extends naturally to the case with multiple such rooted graphs.
We will prove the following stronger statement:

$(*)$ For all sequences $\langle u_1, \dots, u_k \rangle$ of distinct nodes of $F$ (with $k > 0$), there is a $(|P| + k, 1)$-HE-GNN $\mathcal{A}$ of nesting depth $d - k$ such that, for all graphs $G$ and maps $h : \{u_1, \dots, u_k\} \to V_G$,
$$
\mathrm{run}_{\mathcal{A}}(G, \mathrm{emb}_G^{+h})(h(u_k)) = \hom_h(F, G)
$$
Observe that the special case of $(*)$ with $k = 1$ and $u_1 = u$ yields a $(|P| + 1, 1)$-HE-GNN $\mathcal{B}$ of nesting depth $d - 1$ such that
$$
\mathrm{run}_{\mathcal{B}}(G, \mathrm{emb}_G')(v) = \hom((F, u), (G, v))
$$
where $\mathrm{emb}_G' = \{w : \mathrm{emb}_G(w) \oplus \delta_{w h(u)} \mid w \in V_G\}$. Let $\mathcal{C}$ be the trivial $(|P|+1, |P|+1)$-GNN that implements the identity function. It then follows that $\mathcal{A} = (\mathcal{B}, \mathcal{C})$, which is a $(|P|, |P|+1)$-HE-GNN of nesting depth $d$, has the desired behavior.

It remains to prove $(*)$. The proof proceeds by induction on $d - k$. When $k = d$, the partial function $h$ is in fact a total function from the node set of $F$ to that of $G$. It is easy to implement a GNN that, in this case, outputs 1 if $h$ is a homomorphism and outputs 0 otherwise. Indeed, this can be done using only ReLU-FFNN combination functions and Sum aggregation, and using at most $|V_F|$ rounds of message passing. We omit the details, as they are straightforward. Next, let $0 < k < d$ and assume that $(*)$ holds for $k + 1$. We will show that it then also holds for $k$.
Since $k > 0$, there are nodes of $F$ that do not belong to the sequence $\langle u_1, \dots, u_k \rangle$. It follows from this, by the connectedness of $F$, that there is an edge of $F$ connecting some $u_i$ (with $i \leq k$) to some $u' \notin \{u_1, \dots, u_k\}$. As a basic fact about homomorphism counts, we have
$$
\hom_h(F, G) = \sum_{v' \in V_G \text{ such that } (h(u_i), v') \in E_G} \hom_{h \cup \{(u', v')\}}(F, G)
$$
We now apply the induction hypothesis to $\langle u_1, \dots, u_k, u' \rangle$, obtaining a $(|P| + k + 1, 1)$-HE-GNN $\mathcal{B}$. Let $\mathcal{C}$ be a $(|P| + k + 1, |P| + k + 1)$-GNN that performs one round of message passing using Sum aggregation and using the combination function
$$
\mathrm{COM}\big(\langle x_1, \dots, x_{|P|+k}, x', z_1, \dots, z_{|P|+k}, z' \rangle\big) = \langle x_1, \dots, x_{|P|+k}, z' \rangle
$$
(i.e., summing up the values in the $(|P|+k+1)$-th position across all neighbors, and keeping the other values in the vector the same). Let $\mathcal{A} = (\mathcal{B}, \mathcal{C})$. It follows from the construction that
$$
\mathrm{run}_{\mathcal{A}}(G, \mathrm{emb}_G^{+h})(h(u_i)) = \hom_h(F, G)
$$
In other words, after running $\mathcal{A}$, “node $h(u_i)$ knows the answer”. All that remains to complete the construction is to “pass this information from $h(u_i)$ to $h(u_k)$”.
This can be done by augmenting $\mathcal{C}$ with $|V_F|$ more layers of message passing (because $h(u_i)$ and $h(u_k)$ are at most $|V_F|$ apart). We omit the details, which are straightforward. □

Lemma D.5. For every pointed graph $(F, u)$ of ego-rank $n$, there is a witnessing dep-function (i.e., with maximum node rank $n$) such that

1. $dep$ is well-founded, i.e., $v \neq dep^k(v)$ for all $v$ and $k \geq 1$.
2. for all nodes $v$, every connected component of the subgraph induced by $dep^{-1}(v)$ contains a neighbor of $v$. Equivalently, when $dep(w) = v$, there is a path from $w$ to $v$ passing only through nodes $w'$ with $dep(w') = v$.

Proof. 1. If $dep$ is not well-founded, there is a cycle
$$
v_1, v_2, v_3, \dots, v_n
$$
where $dep(v_i) = v_{i+1}$ for $i < n$ and $dep(v_n) = v_1$. Note that, in this case, $deps(v_1) = deps(v_2) = \dots = deps(v_n) = \{v_1, \dots, v_n\}$. Fix such a cycle, and let $dep'$ be identical to $dep$ except that (i) $dep'(v_n) = \bot$, and (ii) for all $v \notin \{v_1, \dots, v_n\}$, if $dep(v) \in \{v_1, \dots, v_n\}$ then $dep'(v) = v_1$. Note that, in this way, $deps'(v) = deps(v)$ for all $v \notin \{v_1, \dots, v_n\}$. We claim that $dep'$ still satisfies the conditions given in the definition of ego-rank. Indeed,

• $dep'(u)$ is still $\bot$.
• Let $(w, v) \in E$ be an edge. Then one of the following three cases holds: (a) $dep(w) = dep(v)$.
Then the same holds for $dep'$, except possibly if $w \notin \{v_1, \dots, v_n\}$, $dep(w) \in \{v_1, \dots, v_n\}$, and $v \in \{v_1, \dots, v_n\}$. However, in this case, we have that $dep'(w) = v_1$ and hence $v \in deps'(w) = \{v_1, \dots, v_n\}$. (b) $w \in deps(v)$. It follows from the construction of $dep'$ that, for all $v \notin \{v_1, \dots, v_n\}$, $deps'(v) = deps(v)$. Therefore, we only have to consider the case that $v \in \{v_1, \dots, v_n\}$. If $w \in \{v_1, \dots, v_n\}$, then we have either $w \in deps'(v)$ or $v \in deps'(w)$ (note that $v \neq w$). Otherwise, by construction, $dep'(w) = v_1$ and hence $v \in deps'(w)$. (c) $v \in deps(w)$. This case is symmetric to the above.

• Finally, we must show that, for each $v \in V \cup \{\bot\}$, the subgraph induced by $dep'^{-1}(v)$ is acyclic. For each node $v \neq v_1$, we have that $dep'^{-1}(v) \subseteq dep^{-1}(v)$, and hence, since $dep^{-1}(v)$ is acyclic, so is $dep'^{-1}(v)$. Therefore, it remains only to consider $dep'^{-1}(\bot)$ and $dep'^{-1}(v_1)$. Suppose there were a cycle in the subgraph induced by $dep'^{-1}(\bot)$. This cycle must contain the node $v_n$, while all other nodes $u$ on the cycle satisfy $dep(u) = \bot$. However, it is easy to see that there can be no edge connecting $v_n$ to such a node $u$.
Finally, suppose there were a cycle
$$
w_1, \dots, w_k
$$
in the subgraph induced by $dep'^{-1}(v_1)$. If $dep(w_1) = dep(w_2) = \dots = dep(w_k) = v_i$, then the subgraph induced by $dep^{-1}(v_i)$ would already have a cycle, which we have assumed is not the case. Therefore, the cycle must include an edge connecting nodes $w_i$ and $w_{i+1}$ where $dep(w_i) \neq dep(w_{i+1})$. Note that $w_i, w_{i+1} \notin \{v_1, \dots, v_n\}$ and that $dep(w_i), dep(w_{i+1}) \in \{v_1, \dots, v_n\}$. Such an edge cannot exist, as it fails to satisfy the second property in the definition of ego-rank.

2. We assume $dep$ satisfies property 1. Suppose that property 2 fails, i.e., there is a node $w$ with $dep(w) = v$ such that no node $w'$ reachable from $w$ in the subgraph induced by $dep^{-1}(v)$ is adjacent to $v$. Let $dep'$ be identical to $dep$ except that, for all $w'$ reachable from $w$ in the subgraph induced by $dep^{-1}(v)$, we set $dep'(w') := dep(v)$. Note that, by property 1, $v \notin deps(v)$ and hence the net effect of this change is that $deps'(w') = deps(w') \setminus \{v\}$. We claim that $dep'$ still satisfies all requirements from the definition of ego-rank. Indeed:

• $dep'(u)$ is still $\bot$.
• Let $(w_1, w_2) \in E$ be an edge. Then one of the following conditions holds: (a) $dep(w_1) = dep(w_2)$.
Then the same holds for $dep'$ (note that $w_1$ and $w_2$ belong to the same connected component of $dep^{-1}(dep(w_1))$). (b) $w_1 \in deps(w_2)$ or vice versa. It is easy to see that, in this case, the same still holds for $deps'$.

• Finally, we must show that, for each $x \in V \cup \{\bot\}$, the subgraph induced by $dep'^{-1}(x)$ is acyclic. It suffices to consider the case where $x = dep(v)$, because, for all other $x$, we have that $dep'^{-1}(x) \subseteq dep^{-1}(x)$. Therefore, let $x = dep(v)$ and suppose for the sake of a contradiction that $dep'^{-1}(x)$ contains a cycle. Since $dep^{-1}(x)$ and $dep^{-1}(v)$ were both acyclic, this cycle must include an edge $(w_1, w_2)$ such that $dep(w_1) = v$ and $dep(w_2) = x$. It must then be the case that $w_2 \in deps(w_1)$, and, indeed, it must be the case that $w_2 = v$, a contradiction, since the connected component of $w_1$ in $dep^{-1}(v)$ was not supposed to be connected to $v$.

Figure 4: Rooted $5 \times 2$ grid

By repeating this operation, we obtain a dep-function satisfying property 2. □

Proposition D.6. The rooted $n \times 2$-grid (with $n \geq 1$) as depicted in Figure 2 has ego-rank $n - 1$.

Proof. For the $n - 1$ upper bound, a witnessing dep-function is already depicted in Figure 2 for the special case of $n = 5$, and it can be modified in the obvious way for the general case (note that there are also other choices for the dep-function that yield the same ego-rank). In what follows, we prove the $n - 1$ lower bound. Let $dep : V \to V \cup \{\bot\}$ be any function satisfying the requirements in the definition of ego-rank.
We may assume that it also satisfies the properties described in Lemma D.5. Recall that $u_1$ is the root of the rooted graph. We show there is a sequence
$$
\langle \pi_1, \pi_2, \dots, \pi_n \rangle
$$
where $\pi_1 = u_1$, and for $i > 1$ the following holds:

1. $\pi_i \in \{u_i, v_i\}$
2. Either (i) $dep(\pi_i) = \pi_{i-1}$ or (ii) $dep(\pi_i) = \pi'_{i-1}$ and $dep^2(\pi_i) = \pi_{i-1}$, where
$$
\pi'_{i-1} = \begin{cases} u_{i-1} & \text{if } \pi_{i-1} = v_{i-1} \\ v_{i-1} & \text{if } \pi_{i-1} = u_{i-1} \end{cases}
$$

We apply induction over $i > 1$. It follows from the induction hypothesis that $deps(\pi_{i-1})$ does not contain any $u_j, v_j$ with $j \geq i - 1$. Suppose w.l.o.g. that $\pi_{i-1} = u_{i-1}$. Note firstly that either $u_{i-1} \in deps(v_{i-1})$ or $u_{i-1} \in deps(u_i)$. For, if this did not hold, then, by definition of $dep$, $dep(u_{i-1}) = dep(v_{i-1}) = dep(u_i)$, which by well-foundedness leaves no possibility for $dep(v_i)$. Since every connected component of $dep^{-1}(u_{i-1})$ is connected to $u_{i-1}$, it follows that $u_{i-1} = dep(u_i)$ or $u_{i-1} = dep(v_{i-1})$. In the first case we let $\pi_i = u_i$. Suppose then that $u_{i-1} = dep(v_{i-1})$; then either $u_{i-1} = dep(v_i)$ or $v_{i-1} \in deps(v_i)$.
In the second case, by well-foundedness and the connectedness of $dep^{-1}(v_{i-1})$ to $v_{i-1}$, it follows that $v_{i-1} = dep(v_i)$. In both cases we let $\pi_i = v_i$, completing the induction. Since $|deps(\pi_n)| \geq n - 1$, the lower bound follows. □

Lemma D.7. Let $(F, u)$ be a rooted graph and $N > 0$. Then there exists a number $M$ such that, for all pointed graphs $(G, v)$ of degree at most $N$, $\hom((F, u), (G, v)) \leq M$.

Proof. Let $r$ be the maximal distance from $u$ to any other node of $F$. The radius-$r$ neighborhood of $v$ in $G$ contains at most $(N + 1)^r$ nodes. It follows that there can be at most $M = ((N + 1)^r)^n$ homomorphisms from $(F, u)$ to $(G, v)$, where $n$ is the number of nodes of $F$. □

Lemma D.8. ReLU-FFNNs can multiply small natural numbers. That is, for each $N > 0$ and $k > 0$, there is a ReLU-FFNN with input dimension $k$ and output dimension 1 that, on input $(x_1, \dots, x_k)$ with $0 \leq x_i \leq N$, outputs $\Pi_i x_i$.

Proof. This is well-known (and holds not only for ReLU but also for other common non-linear activation functions). For the sake of completeness we sketch a proof. First, for a fixed $m$, we can test with a ReLU-FFNN, for a given natural number $x$, whether $x \geq m$. Indeed, $\mathrm{ReLU}(1 - \mathrm{ReLU}(m - x))$ is 1 if this holds and 0 otherwise. Furthermore, the Boolean operators of conjunction and negation can be implemented by ReLU-FFNNs as well (cf. the proof of Proposition 2.3).
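The threshold and conjunction gadgets of Lemma D.8, and the resulting product representation, can be checked numerically. The following plain-Python sketch mirrors the construction with ordinary arithmetic standing in for an actual ReLU-FFNN; all helper names are ours, not the paper's:

```python
from itertools import product as all_tuples

def relu(x):
    return max(0.0, x)

def at_least(m, x):
    # ReLU(1 - ReLU(m - x)) is 1 iff the natural number x satisfies x >= m
    return relu(1 - relu(m - x))

def equals(m, x):
    # x == m  iff  x >= m and not x >= m + 1
    return relu(at_least(m, x) - at_least(m + 1, x))

def relu_product(xs, N):
    # Lemma D.8: represent prod(xs) as a linear combination of the
    # indicators f_{(m_1,...,m_k)} over all tuples (m_1,...,m_k) in {0..N}^k
    total = 0.0
    for ms in all_tuples(range(N + 1), repeat=len(xs)):
        coeff = 1
        for m in ms:
            coeff *= m          # the constant factor Π_i m_i
        indicator = 1.0
        for m, x in zip(ms, xs):
            indicator = relu(indicator + equals(m, x) - 1)  # ReLU-conjunction
        total += coeff * indicator
    return total

print(relu_product([2, 3, 4], N=4))  # 24.0
```

Exactly one tuple of constants matches the input, so the sum collapses to the intended product; all intermediate values stay within what a fixed-size ReLU network can express for inputs bounded by $N$.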
It follows that the function
$$
f_{(m_1, \dots, m_k)}(x_1, \dots, x_k) = \begin{cases} 1 & \text{if } x_i = m_i \text{ for all } i \leq k \\ 0 & \text{otherwise} \end{cases}
$$
can also be implemented by a ReLU-FFNN. Using this, we can represent the product $\Pi_i x_i$ by the linear expression
$$
\sum_{0 \leq m_1, \dots, m_k \leq N} \left( \Pi_i m_i \cdot f_{(m_1, \dots, m_k)}(x_1, \dots, x_k) \right)
$$
Note that the $\Pi_i m_i$ factors are viewed as constant integer coefficients here. □

Proposition 5.6. For all rooted graphs $(G, v)$ with $G = (V, E, \mathrm{lab})$,

1. tree-width$(G) - 1 \leq$ ego-rank$(G, v) \leq |V|$.
2. ego-rank$(G, v) = 0$ if and only if $G$ is acyclic.
3. ego-rank$(G, v) \leq 1$ whenever $(G, v)$ is $c$-acyclic.

Proof. For the first part of the first claim, let $dep$ be a function witnessing that $(G, v)$ has ego-rank $k$. We define a tree decomposition as follows:

• The nodes of the tree decomposition are (i) all nodes $w$ of the graph $G$, and (ii) all edges $(w, v)$ of the graph $G$ satisfying $dep(w) = dep(v)$. The bag associated to each $w$ is $\{w\} \cup deps(w)$ and the bag associated to each edge $(w, v)$ is $\{w, v\} \cup deps(w)$.
• The edges of the tree decomposition are pairs where one node is a node of $G$ and the other is an edge of $G$ in which the node participates. Note that, in this way, every edge of $G$ is indeed contained in a bag. Furthermore, the third condition in the definition of dependency functions guarantees that this tree decomposition is indeed a tree.
Consider any path in the tree decomposition of the form
$$
w_1 \; (w_1, w_2) \; w_2 \; (w_2, w_3) \; \dots \; (w_{n-1}, w_n) \; w_n
$$
where $dep(w_i) = dep(w_j)$ for all $i < j < n$, and hence $w_i \neq w_j$ for all $i < j < n$ (because the subgraph induced by $dep^{-1}(dep(w_i))$ is acyclic). Suppose, now, that some graph vertex $x$ belongs to the bag of $w_1$ as well as to the bag of $w_n$. In this case, $x$ must belong to $deps(w_1)$ and to $deps(w_n)$ and hence it belongs to the bag of each node on the path. A similar argument applies for paths that start or end with an edge. This shows that the constructed tree decomposition is indeed a valid tree decomposition. Moreover, the maximal bag size is $k + 2$. Therefore, the tree-width of $G$ is at most $k + 1$.

For the second part of the first claim, it suffices to choose an arbitrary enumeration $v_1, v_2, \dots, v_n$ of the nodes of $G$, where $v_1 = v$, and set $dep(v_1) = \bot$ and $dep(v_{i+1}) = v_i$.

The second claim follows immediately from the definition of ego-rank. For the third claim, it suffices to take $dep(v) = \bot$ and $dep(v') = v$ for all $v' \neq v$. □

The next theorem is not stated in the body of the paper, but it is a special case of Theorem 5.8(1) below, and it serves as a warm-up towards the proof of Theorem 5.8.

Theorem D.9. Let $\mathfrak{F}$ be any finite set of acyclic rooted graphs.
There is a $(|P|, |P| + |\mathfrak{F}|)$-GNN $\mathcal{A}$ such that, for all pointed graphs $(G, v)$, $\mathrm{run}_{\mathcal{A}}(G)(v) = \mathrm{emb}_G(v) \oplus \hom(\mathfrak{F}, (G, v))$. The GNN in question uses multiplication in the combination function.

Proof. In what follows, we will refer to acyclic rooted graphs $(F, u)$ also as trees, where we think of $u$ as the root of the tree. By an immediate subtree of a tree $(F, u)$ we will mean a rooted graph $(F', u')$ where $u'$ is a neighbor of $u$ and where $F'$ is the induced subgraph of $F$ consisting of all nodes whose shortest path to $u$ contains $u'$. We will write $(F, u) \Rightarrow (F', u')$ to indicate that $(F', u')$ is an immediate subtree of $(F, u)$. By the depth of a tree $(F, u)$ we will mean the maximum, over all nodes $u'$ of $F$, of the distance from $u$ to $u'$. Note that $(F, u)$ has depth zero if and only if it has no immediate subtrees, and that the depth of an immediate subtree of $(F, u)$ is always strictly smaller than the depth of $(F, u)$.

We may assume without loss of generality that $\mathfrak{F}$ is closed under taking immediate subtrees. The general case then follows by adding one additional layer on top that projects the resulting embedding vectors to a subset of $\mathfrak{F}$. Let $\mathfrak{F} = \{(F_1, u_1), \dots, (F_k, u_k)\}$ and let $L$ be the maximal depth of a rooted graph in $\mathfrak{F}$.
Let $$ \mathcal{A} = \big( (\mathrm{COM}_i)_{i = 1 \ldots L}, (\mathrm{AGG}_i)_{i = 1 \ldots L} \big) $$ where
• $\mathrm{COM}_1 : \mathbb{R}^{2|P|} \to \mathbb{R}^{|P|+k}$ is given by $\mathrm{COM}_1(x_1, \ldots, x_{|P|}, z_1, \ldots, z_{|P|}) = (y_1, \ldots, y_{|P|+k})$ with $y_i = x_i$ for $i \leq |P|$, and with $$ y_{|P|+i} = \begin{cases} 1 & \text{if } F_i \text{ has depth } 0 \text{ and } x_j = 1 \text{ for each } p_j \in \mathrm{lab}^{F_i}(u_i) \\ 0 & \text{otherwise} \end{cases} $$
• For $i > 1$, $\mathrm{COM}_i : \mathbb{R}^{2(|P|+k)} \to \mathbb{R}^{|P|+k}$ is given by $\mathrm{COM}_i(x_1, \ldots, x_{|P|+k}, z_1, \ldots, z_{|P|+k}) = (y_1, \ldots, y_{|P|+k})$ with $y_i = x_i$ for $i \leq |P|$, and with $$ y_{|P|+i} = \begin{cases} \displaystyle\prod_{F_j \text{ with } F_i \Rightarrow F_j} z_{|P|+j} & \text{if } x_\ell = 1 \text{ for each } p_\ell \in \mathrm{lab}^{F_i}(u_i) \\ 0 & \text{otherwise} \end{cases} $$
• Each $\mathrm{AGG}_i$ is (pointwise) sum. It is not difficult to see that $\mathrm{COM}_i$ can be implemented by an FFNN using ReLU and multiplication.
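The layered sum-product computation above is, in effect, a dynamic program for counting homomorphisms from rooted trees. As a minimal illustration (a plain recursive sketch under our own encodings of trees and graphs, not the paper's FFNN realization), the recursion multiplies, over immediate subtrees, the sums of counts over neighbors:

```python
def hom_count(tree, adj, lab, v):
    """Count homomorphisms from a rooted tree into the pointed graph (G, v).

    tree: pair (labels, children) -- `labels` is the set of labels that must
          hold at the image of the root; `children` lists the immediate
          subtrees in the same format (cf. the relation F_i => F_j above).
    adj:  adjacency dict, node -> set of neighbors.
    lab:  labeling dict, node -> set of labels holding at that node.
    """
    labels, children = tree
    if not labels <= lab[v]:       # root labels must hold at v (the "x_j = 1" test)
        return 0
    result = 1
    for child in children:         # product over immediate subtrees (the COM step)
        # sum over neighbors of v (the AGG step)
        result *= sum(hom_count(child, adj, lab, w) for w in adj[v])
    return result
```

For example, on an unlabeled triangle, the single-edge tree yields the degree of $v$, and the length-2 path yields the number of 2-walks starting at $v$.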
It follows from the above construction, by induction on $d$, that for all $d \geq 0$ and all $(F_j, u_j) \in \mathfrak{F}$ of depth $d$, $\mathrm{emb}_G^i(v)(|P|+j) = \mathrm{hom}((F_j, u_j), (G, v))$ for $i > d$. Furthermore, it is immediately clear from the construction that $\mathrm{emb}_G^i(v)(j) = \mathrm{emb}_G(v)(j)$ for all $j \leq |P|$. □ Theorem 5.8. Let $\mathfrak{F}$ be any finite set of rooted graphs, and let $d = \max\{\text{ego-rank}(F, u) \mid (F, u) \in \mathfrak{F}\}$. 1. There is a $(|P|, |P| + |\mathfrak{F}|)$-HE-GNN $\mathcal{A}$ of nesting depth $d$ such that, for all pointed graphs $(G, v)$, $\mathrm{run}_{\mathcal{A}}(G)(v) = \mathrm{emb}_G(v) \oplus \mathrm{hom}(\mathfrak{F}, (G, v))$. The HE-GNN uses multiplication in the combination functions. 2. For each $N > 0$, there is a $(|P|, |P| + |\mathfrak{F}|)$-HE-GNN $\mathcal{A}$ of nesting depth $d$ such that, for pointed graphs $(G, v)$ of degree at most $N$, $\mathrm{run}_{\mathcal{A}}(G)(v) = \mathrm{emb}_G(v) \oplus \mathrm{hom}(\mathfrak{F}, (G, v))$. The HE-GNN uses only sum as aggregation and ReLU-FFNNs as combination functions. Proof. We prove the first statement. The proof of the second statement is identical, except that we can replace the use of multiplication by a ReLU-FFNN, since the numbers being multiplied are bounded by a constant (cf. Lemma D.7 and Lemma D.8). We may assume that $\mathfrak{F}$ consists of a single rooted graph $(F, u)$. Let $dep$ be the dependency function witnessing the fact that $(F, u)$ has ego-rank $d$.
By Lemma D.5, we may assume that $dep$ is well-founded and that, for each node $w$, if $dep^{-1}(w)$ is non-empty, then each connected component of the subgraph induced by $dep^{-1}(w)$ contains a neighbor (in the original graph $F$) of $w$. By the dependency depth of a node $w$ of $F$, we mean the largest number $\ell$ for which $w = dep^\ell(w')$ for some $w'$. Note that this is a finite number. For a node $w$, we denote by $F_w$ the subgraph of $F$ induced by the set of nodes $deps^{-1}(w) \cup \{w\} \cup deps(w)$, where $deps^{-1}(w)$ is the set of nodes $w'$ for which $w \in deps(w')$. We will now prove the following claim: $(*)$ For each node $w$ of dependency depth $\ell \geq 1$ with $deps(w) = \{w_1, \ldots, w_k\}$, there is a $(|P| + k + 1, 1)$-HE-GNN $\mathcal{A}_w$ of nesting depth $\ell - 1$ such that, for all graphs $G$ with vertices $V_G$ and maps $h : \{w\} \cup deps(w) \to V_G$, $$ \mathrm{run}_{\mathcal{A}_w}(G, \mathrm{emb}_G^{+h})(h(w)) = \mathrm{hom}_h(F_w, G) $$ where $\mathrm{emb}_G^{+h} = \{ v' \mapsto \mathrm{emb}_G(v') \oplus \langle \delta_{v'h(w_1)}, \ldots, \delta_{v'h(w_k)}, \delta_{v'h(w)} \rangle \mid v' \in V_G \}$. Figure 5: Decomposition of the subgraph $F_w$ induced by $deps(w) \cup \{w\} \cup deps^{-1}(w)$ when $w$ has dependency depth greater than 1. The gray circles are disjoint.
In addition to the edges drawn in the picture, there may also be additional edges connecting nodes in $deps^{-1}(w)$ to nodes in $\{w\} \cup deps(w)$. We omit such edges from the drawing in order not to clutter the picture. Recall that $\mathrm{hom}_h(F_w, G)$ denotes the number of homomorphisms from $F_w$ to $G$ extending the partial function $h$. Also, note that the embedding vectors of $\mathrm{emb}_G^{+h}$, by construction, include features that uniquely mark the $h$-image of each $w' \in deps(w)$ as well as of $w$. We first prove $(*)$ by induction on the dependency depth $\ell \geq 1$ of $w$, and then show that it implies the main statement of our theorem. The subgraph $F_w$ can be decomposed as described in Figure 5. By Lemma D.5, each connected component of the subgraph of $F$ induced by $dep^{-1}(w)$ includes at least one neighbor of $w$. Let $(F', w)$ be the rooted tree that consists of the subgraph induced by $\{w\} \cup dep^{-1}(w)$, after removing, for each connected component of $dep^{-1}(w)$, all but one (arbitrarily chosen) connecting edge to $w$. Note that, as explained in the caption of Figure 5, there may be more than one edge between $w$ and a given connected component of $dep^{-1}(w)$, but we keep only one in order to ensure that $F'$ is a tree rooted at $w$. We refer to the edges connecting $w$ to nodes in $dep^{-1}(w)$ that we did not keep, as well as edges connecting nodes in $deps(w)$ to nodes in $dep^{-1}(w)$, as “back-edges”.
Now, by construction, for every function $h : \{w\} \cup deps(w) \to V_G$, we have that $$ \mathrm{hom}_h(F_w, G) = \sum_{\text{homomorphisms } f : (F', w) \to (G, h(w))} \; \prod_{w_i \in dep^{-1}(w)} \mathrm{hom}_{h \cup \{(w_i, f(w_i))\}}(F_{w_i}, G) $$ Note that the fact that we omitted the back-edges from $F'$ does not impact the above equation. Indeed, if a function $f : (F', w) \to (G, h(w))$ fails to preserve a back-edge, it will simply not extend to a homomorphism from $F_{w_i}$ to $G$, and hence will not contribute to the above sum. Now, if $\ell = 1$, $\mathrm{hom}_{h \cup \{(w_i, f(w_i))\}}(F_{w_i}, G)$ is either 1 or 0, since every vertex of $F_{w_i}$ is in the domain of $h \cup \{(w_i, f(w_i))\}$. Since all nodes in the range of this function are uniquely marked, this can be computed by a $(|P| + k + 1, 1)$-GNN $\mathcal{C}_{w_i}$, i.e., a HE-GNN of nesting depth 0. More precisely, for each $v' \in G$: $$ \mathrm{run}_{\mathcal{C}_{w_i}}(G, \mathrm{emb}_G^{+h \cup \{(w_i, v')\}})(v') = \mathrm{hom}_{h \cup \{(w_i, v')\}}(F_{w_i}, G) $$ These can be combined using Lemma B.3 into a single GNN $\mathcal{C}$ computing their concatenation. We now add layers, exactly as in the proof of Theorem D.9, that run through the tree $(F', w)$ and aggregate the respective counts according to the above equation to compute $\mathrm{hom}_h(F_w, G)$.
If $\ell > 1$, we can apply the induction hypothesis $(*)$ to obtain a $(|P| + k + 2, 1)$-HE-GNN $\mathcal{B}_{w_i}$ of nesting depth at most $\ell - 2$ for each $w_i \in dep^{-1}(w)$, calculating the corresponding factor in the above equation. Now, for each $v' \in G$: $$ \mathrm{run}_{\mathcal{B}_{w_i}}(G, \mathrm{emb}_G^{+h \cup \{(w_i, v')\}})(v') = \mathrm{hom}_{h \cup \{(w_i, v')\}}(F_{w_i}, G) $$ These can be combined using Lemma B.4 into a single HE-GNN $(\mathcal{B}, \mathcal{C})$ of nesting depth $\ell - 2$ computing their concatenation. Similarly to the case $\ell = 1$, we add layers to $\mathcal{C}$ that run through the tree $(F', w)$ and aggregate the respective counts according to the above equation, and call the result $\mathcal{C}'$. Let $\mathcal{T}$ be a trivial GNN. Then $\mathcal{A} = ((\mathcal{B}, \mathcal{C}'), \mathcal{T})$ is a $(|P| + k + 1, 1)$-HE-GNN of nesting depth $\ell - 1$ that computes $\mathrm{hom}_h(F_w, G)$, concluding the proof by induction of $(*)$. Finally, we must show that $(*)$ implies the main statement of our theorem. Let $(F', u)$ be the subgraph of $F$ induced by the nodes in $dep^{-1}(\perp)$. Then, again: $$ \mathrm{hom}((F, u), (G, v)) = \sum_{\text{homomorphisms } f : (F', u) \to (G, v)} \; \prod_{w_i \in F'} \mathrm{hom}_{\{(w_i, f(w_i))\}}(F_{w_i}, G) $$ The argument now is very similar to the one we used in the inductive step.
Since each $w_i \in F'$ has dependency depth at most $d$, by $(*)$ and Theorem D.9 we obtain, for each $w_i \in F'$, a $(|P| + 1, 1)$-HE-GNN $\mathcal{B}_{w_i}$ of nesting depth $d - 1$ computing the corresponding factor in the above equation. These can be combined using Lemma B.4 into a single $(|P| + 1, |F'|)$-HE-GNN $(\mathcal{B}, \mathcal{C})$ of nesting depth $d - 1$ computing their concatenation. We again add layers to $\mathcal{C}$, as in the proof of Theorem D.9, that run through the tree $(F', u)$ and aggregate the respective counts according to the above equation, and call this GNN $\mathcal{C}'$. Then, for a trivial GNN $\mathcal{T}$, it follows that $\mathcal{A} = ((\mathcal{B}, \mathcal{C}'), \mathcal{T})$ is a $(|P|, 1)$-HE-GNN of nesting depth $d$ that computes $\mathrm{hom}((F, u), (G, v))$. □

# 5.3. Higher order GNNs

Proof. Let $\mathcal{A}$ be a HE-GNN of depth $d$ such that $\mathrm{cls}_{\mathcal{A}}(G, v) \neq \mathrm{cls}_{\mathcal{A}}(G', v')$. By Theorem 3.3, there exists a $\mathrm{GML}(\downarrow^d)$-sentence $\phi$ such that $\mathrm{cls}_\phi(G, v) \neq \mathrm{cls}_\phi(G', v')$. The sentence $\phi$ is equivalent to one that uses at most $d$ variables $x_i$. Hence there exists an equivalent formula $\psi$ in the $(d+2)$-variable fragment of first-order logic with counting quantifiers (cf. Table 2). By Theorem 5.9, there then exists a $(d+2)$-GNN $\mathcal{B}$ such that $\mathrm{cls}_{\mathcal{B}}(G, v) \neq \mathrm{cls}_{\mathcal{B}}(G', v')$. □ We apply a lemma by Morris et al.
[25], which was slightly adjusted by Qian et al. [29]. Definition D.10. For a graph $G = (V, E, \mathrm{lab})$, a set $S \subseteq V$ forms a distance-2 clique if any two nodes in $S$ are connected by a path of length 2. We say $S$ is colorful if, for all $v, v' \in S$, $\mathrm{lab}(v) = \mathrm{lab}(v')$ iff $v = v'$. Lemma D.11. For $d \in \mathbb{N}$ there exist graphs $G_d, H_d$ such that: 1. $G_d \equiv_{(d-1)\text{-WL}} H_d$. 2. There exists a colorful distance-2 clique of size $d + 1$ in $G_d$. 3. There does not exist a colorful distance-2 clique of size $d + 1$ in $H_d$. Table 2: Translation from a $\mathrm{GML}(\downarrow)$-formula $\phi$ containing variables $z_1, \ldots, z_k$ to a $\mathsf{C}^{k+2}$-formula $tr_x(\phi)$ containing variables $x, y, z_1, \ldots, z_k$.
$$ tr_x(p_i) = P_i(x) \qquad tr_y(p_i) = P_i(y) $$
$$ tr_x(z_i) = (x = z_i) \qquad tr_y(z_i) = (y = z_i) $$
$$ tr_x(\phi \wedge \psi) = tr_x(\phi) \wedge tr_x(\psi) \qquad tr_y(\phi \wedge \psi) = tr_y(\phi) \wedge tr_y(\psi) $$
$$ tr_x(\neg\phi) = \neg\, tr_x(\phi) \qquad tr_y(\neg\phi) = \neg\, tr_y(\phi) $$
$$ tr_x(\Diamond^{\geq n}\phi) = \exists^{\geq n} y\, (Rxy \wedge tr_y(\phi)) \qquad tr_y(\Diamond^{\geq n}\phi) = \exists^{\geq n} x\, (Ryx \wedge tr_x(\phi)) $$
$$ tr_x(\downarrow z_i.\phi) = \exists z_i\, (z_i = x \wedge tr_x(\phi)) \qquad tr_y(\downarrow z_i.\phi) = \exists z_i\, (z_i = y \wedge tr_y(\phi)) $$
Theorem 5.11. For $d \geq 0$, HES-GNN-$(d, 3)$ can distinguish pointed graphs that cannot be distinguished by $d$-WL, or equivalently, by a $(d+1)$-GNN. Proof. We show there exist a formula $\phi \in \mathrm{GML}(\downarrow_{W^3}^d)$ and a node $v$ in $G_{d+1}$ such that $G_{d+1}, v \models \phi$ but $H_{d+1}, w \not\models \phi$ for all $w$ in $H_{d+1}$. Let $v$ be in a colorful distance-2 clique $S$ of size $d + 2$, and let $\alpha_1, \ldots, \alpha_{d+2}$ be conjunctions of literals matching the distinct labelings of nodes in $S$. Thus $v, v'$ only satisfy the same $\alpha_i$ if $\mathrm{lab}(v) = \mathrm{lab}(v')$. Further, let $G_{d+1}, v \models \alpha_1$. We let: $$ \phi = \downarrow x_1.\, W^3(\Diamond \downarrow x_2.\, W^3(\Diamond \downarrow x_3.\, W^3(\ldots \Diamond \downarrow x_d.\, W^3(\xi \wedge \psi) \ldots))) $$ $\xi$ ensures that all $x_i$ have distinct values with labelings as specified by $\alpha_1, \ldots, \alpha_d$, and form a colorful distance-2 clique of size $d$.
$$ \xi = \bigwedge_{1 \leq i \leq d} @_{x_i}\Big( \alpha_i \wedge \bigwedge_{1 \leq j \leq d,\, j \neq i} \Diamond\Diamond x_j \Big) $$ $\psi$ ensures that there are two more connected vertices, with labelings matching $\alpha_{d+1}, \alpha_{d+2}$, that have edges to all the $x_i$: $$ \psi = \Diamond\Big( \alpha_{d+1} \wedge \bigwedge_{1 \leq i \leq d} \Diamond x_i \wedge \Diamond\big( \alpha_{d+2} \wedge \bigwedge_{1 \leq i \leq d} \Diamond x_i \big) \Big) $$ Now $G_{d+1}, v \models \phi$, and any node in $H_{d+1}$ satisfying $\phi$ would be in a colorful distance-2 clique. Hence, for all $v'$ in $H_{d+1}$: $$ G_{d+1}, v \not\equiv_{\mathrm{GML}(\downarrow_{W^3}^d)} H_{d+1}, v' $$ We apply the logical characterization of HES-GNN (Theorem 4.1), the WL characterization of higher-order GNNs, and the fact that $G_{d+1} \equiv_{(d-1)\text{-WL}} H_{d+1}$ to obtain the theorem: $$ \rho((d+1)\text{-GNN}) = \rho(d\text{-WL}) \subsetneq \rho(\mathrm{GML}(\downarrow_{W^3}^d)) = \rho(\text{HES-GNN-}(d,3)) $$
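Definition D.10 is straightforward to operationalize; the sketch below (a hypothetical helper with our own graph encoding, not code from the paper) checks whether a node set is a colorful distance-2 clique.

```python
def is_colorful_distance2_clique(S, adj, lab):
    """Definition D.10: every two distinct nodes of S share a common neighbor
    (i.e., are joined by a path of length 2), and lab(v) = lab(v') holds
    only when v = v' (colorfulness)."""
    nodes = list(S)
    for i, v in enumerate(nodes):
        for w in nodes[i + 1:]:
            if not (adj[v] & adj[w]):   # no path of length 2 between v and w
                return False
            if lab[v] == lab[w]:        # equal labels on distinct nodes: not colorful
                return False
    return True
```

For instance, the leaves of a star graph with pairwise-distinct labels form such a clique, since any two leaves share the center as a common neighbor.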
We propose and study Hierarchical Ego Graph Neural Networks (HEGNNs), an expressive extension of graph neural networks (GNNs) with hierarchical node individualization, inspired by the Individualization-Refinement paradigm for graph isomorphism testing. HEGNNs generalize subgraph-GNNs and form a hierarchy of increasingly expressive models that, in the limit, can distinguish graphs up to isomorphism. We provide a logical characterization of HEGNN node classifiers, with and without subgraph restrictions, using graded hybrid logic. This characterization enables us to relate the separating power of HEGNNs to that of higher-order GNNs, GNNs enriched with local homomorphism count features, and color refinement algorithms based on Individualization-Refinement. Our experimental results confirm the practical feasibility of HEGNNs and show benefits in comparison with traditional GNN architectures, both with and without local homomorphism count features.
[ "cs.LG", "cs.AI", "cs.LO", "I.2.6; F.2.0" ]
# 1 Introduction

Transformer-based LLMs with long-context capabilities have significantly enhanced real-world applications, including long-document analysis and personalized conversational agents [1, 19, 46]. However, increasing context lengths substantially raise both memory consumption for KV caching and the computational cost of attention [28]. For example, caching 120K tokens in Qwen2.5-14B with FP16 precision requires approximately 33 GB of memory, surpassing the model's 28 GB parameter storage at the same precision [51]. Recent approaches primarily target reducing KV cache memory size while preserving inference accuracy. These methods include merging attention heads [3], compressing KV pairs into shorter sequences [43], and using sliding-window techniques to limit context windows [22, 49, 50]. Other studies exploit attention sparsity for dynamic KV eviction during decoding [4, 35, 57] and prefill stages [6, 30]. Existing eviction methods typically employ query-aware KV-pair importance scoring computed online during inference [6, 30, 57], selectively retaining KV pairs most relevant to immediate queries (Figure 1a,b). While effective in single-query scenarios, these methods exhibit significant performance degradation in multi-query settings, as the retained KV pairs predominantly overfit to initial queries [32]. We elaborate on these limitations in Section 2.2. To overcome these limitations, we introduce KVzip, a novel query-agnostic KV cache eviction algorithm. KVzip optimizes a reusable compressed KV cache for a given context, enabling efficient inference across diverse future queries (Figure 1c). Our approach particularly benefits scenarios where KV caches are prepared offline, such as personalized conversational agents retaining user profiles, instructions, and dialogue histories [8, 31], or enterprise systems utilizing precomputed document KV caches for retrieval [7].
Figure 1: Comparison of KV eviction strategies. (a) Query-aware KV eviction: each query requires a fresh context prefill before eviction (repetitive prefill, good performance). (b) Reusing a query-dependent cache: one-time prefill, but low performance on later queries. (c) Query-agnostic eviction (ours): one-time prefill with good performance across queries.

Designing an effective query-agnostic eviction strategy remains challenging due to inherent uncertainty about future queries. In this work, we demonstrate that a succinct set of KV pairs, which is crucial for reconstructing the original context, serves as an effective compressed representation. KVzip leverages the insight that a Transformer naturally functions as an encoder-decoder architecture by encoding context into KV pairs, analogous to traditional compression methods such as Zip [25]. Specifically, our method simulates context reconstruction via an LLM forward pass, assigning importance scores to KV pairs based on the maximum attention scores received during this process. This compression principle parallels self-supervised learning approaches that emphasize input reconstruction, demonstrating robust generalization across diverse downstream tasks [14, 20, 42]. After eviction, subsequent queries benefit significantly from reduced latency and memory usage. Specifically, KVzip achieves approximately $2\times$ latency reduction in FlashAttention [13] and $3\text{-}4\times$ reduction in KV cache size during decoding with negligible performance loss on diverse queries. KVzip supports both context-dependent eviction, which achieves higher compression ratios but incurs per-context compression overhead [15], and context-independent eviction, which incurs no overhead after deployment while achieving moderate compression ratios [50].
Section 4 empirically demonstrates KVzip's robustness and effectiveness on multiple benchmarks, including document question-answering, mathematical reasoning, retrieval, and code comprehension tasks, with contexts up to 170K tokens. Unlike existing eviction methods, which show significant performance degradation even at $10\%$ KV eviction in multi-query settings [30, 57], KVzip consistently maintains inference accuracy even when evicting up to $70\%$ of the KV cache. Experiments encompass 12 benchmark datasets, including SQuAD [44], GSM8K [12], and SCBench [32], and involve various models such as LLaMA3.1 [19], Gemma3 [46], and Qwen2.5 [51], ranging from 3B to 14B parameters. Furthermore, KVzip seamlessly integrates with existing optimizations such as KV cache quantization [33] and structured head-level KV eviction [50]. Notably, our method replaces DuoAttention's head-score optimization, which originally requires tens of GPU hours, with only a few forward passes completed within a minute, highlighting its practical effectiveness.

# 2 Preliminary

# 2.1 Notation and Problem Formulation

Consider the text domain $\mathcal{T}$ and an autoregressive Transformer-based LLM $f_{\mathrm{LM}} : \mathcal{T} \to \mathcal{T}$ that generates sequences via greedy decoding [41, 47]. The model comprises $L$ layers and uses Grouped-Query Attention (GQA) [3] with $H$ KV heads, each attended by a group of $G$ query heads. During inference, $f_{\mathrm{LM}}$ caches hidden representations as KV pairs to enhance computational efficiency [28]. Given an input context $c \in \mathcal{T}$ tokenized into $n_c$ tokens, the prefill stage generates a cache containing $L \times H \times n_c$ KV pairs, denoted as $\mathrm{KV}_c$ [2]. Conditioned generation using the cache is denoted as $f_{\mathrm{LM}}(\cdot \mid \mathrm{KV}_c)$.
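As a back-of-the-envelope sketch of such memory figures, the cache stores keys and values for every layer, KV head, and token. The helper below and its example configuration are illustrative assumptions, not the actual Qwen2.5-14B specification:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, n_tokens, bytes_per_elem=2):
    """Bytes needed to cache keys and values (factor 2) for n_tokens,
    assuming FP16 (2 bytes per element) by default."""
    return 2 * n_layers * n_kv_heads * head_dim * n_tokens * bytes_per_elem

# Hypothetical GQA configuration for a ~14B model (assumed numbers, for scale only):
size_gib = kv_cache_bytes(n_layers=48, n_kv_heads=8, head_dim=128, n_tokens=120_000) / 2**30
```

Even at this assumed configuration the cache reaches tens of GiB at 120K tokens, which is the motivation for eviction.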
Our objective is to derive a compact pruned cache $\mathrm{KV}_{c,\mathrm{evicted}} \subseteq \mathrm{KV}_c$ satisfying $$ f_{\mathrm{LM}}(q \mid \mathrm{KV}_{c,\mathrm{evicted}}) \approx f_{\mathrm{LM}}(q \mid \mathrm{KV}_c), \quad \forall q \in \mathcal{T}. $$

# 2.2 Analysis of Existing Approaches

Existing KV eviction methods, such as SnapKV [30] and PyramidKV [6], compress KV caches based on information given during prefill. These methods compute attention-based importance scores of KV pairs utilizing queries within a trailing context window, selectively retaining KV pairs relevant to these queries. While effective for single-query benchmarks such as needle-in-a-haystack [24] and LongBench [5], these methods require repetitive cache prefills for each new query, as shown in Figure 1a. Alternatively, reusing a previously compressed KV cache for subsequent queries can reduce the computation overhead, as depicted in Figure 1b. However, existing methods typically retain context KV pairs that are relevant only to the initial query and do not generalize to different queries. Figure 2 illustrates this issue using the SQuAD multi-QA dataset [44]. SnapKV attains high accuracy when executing prefill and compression individually per query, but performance declines significantly when reusing the cache compressed from the initial query. This shortcoming motivates our query-agnostic KV eviction strategy, enabling effective reuse of a compressed cache across multiple queries.

Figure 2: Accuracy (%) versus KV cache budget ratio on SQuAD multi-QA for SnapKV-prefill, SnapKV-reuse, and KVzip (ours).

# 3 Method

The primary objective of our algorithm is to assign an importance score to each KV pair, determining eviction priorities, following prior studies [57].
Given a context length $n_c$, KVzip assigns importance scores $S \in \mathbb{R}^{L \times H \times n_c}$ to KV pairs in $\mathrm{KV}_c$, subsequently evicting pairs with the lowest scores. Our method supports both non-uniform and uniform head budget allocations [15, 30]. KVzip further accommodates a head-level eviction strategy by computing head-level scores as the maximum of pair-level scores across the sequence dimension, $n_c$ [50]. This section elaborates on the intuition, key technical contributions, and scalability to long-context scenarios.

# 3.1 Intuition

To effectively answer arbitrary queries, the compressed cache $\mathrm{KV}_{c,\mathrm{evicted}}$ and $f_{\mathrm{LM}}$ should retain complete contextual information. Our intuition is that we can verify this completeness by explicitly prompting $f_{\mathrm{LM}}$ to reconstruct the previous context from $\mathrm{KV}_{c,\mathrm{evicted}}$ (Figure 3). If $\mathrm{KV}_{c,\mathrm{evicted}}$ enables $f_{\mathrm{LM}}$ to accurately reconstruct the original context $c$ using the repeat prompt, we can re-prefill the original cache $\mathrm{KV}_c$ and conduct accurate inference.

Figure 3: Context reconstruction test. The context is prefilled into $\mathrm{KV}_c$ and evicted to $\mathrm{KV}_{c,\mathrm{evicted}}$, from which $f_{\mathrm{LM}}$ decodes the context given a repeat prompt.

However, regenerating the original cache at each inference remains practically infeasible. Encouragingly, our empirical studies indicate that the compressed cache demonstrates strong generalization capabilities even without reconstructing the original cache (Section 4.2), empirically achieving Equation (1). This finding resonates with principles from reconstruction-based self-supervised learning, which demonstrates strong generalization across diverse downstream tasks [14, 20, 42].
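Under a uniform budget, the eviction step itself is a simple top-k selection over the flattened scores. The following is a minimal sketch (our own simplification, ignoring per-head budget allocation):

```python
import math

def keep_mask(scores, budget_ratio):
    """Keep-mask over flattened KV-pair importance scores: retain the
    ceil(budget_ratio * n) highest-scoring pairs and evict the rest."""
    n_keep = math.ceil(budget_ratio * len(scores))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    mask = [False] * len(scores)
    for i in ranked[:n_keep]:
        mask[i] = True
    return mask
```

For example, with a budget ratio of 0.5 over four scores, the two highest-scoring KV pairs are kept and the rest are evicted.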
Figure 4: Overview of KV importance scoring. We forward the repeat prompt and context through $f_{\mathrm{LM}}$ with the prefilled $\mathrm{KV}_c$, measure the maximum cross-attention each KV pair receives (at the pair or head level) across heads ($LH$) and sequence positions ($n_c$), and evict the lowest-scoring pairs to obtain $\mathrm{KV}_{c,\mathrm{evicted}}$ for decoding query responses.

# 3.2 KV Importance Scoring

KVzip quantifies KV pair importance based on their contribution to context reconstruction. Specifically, we simulate reconstruction through teacher-forced decoding [17], parallelized via a single forward pass with an input sequence comprising a repeat prompt followed by the original context (Figure 4). We define importance scores to be the maximum attention score each KV pair receives during this forward pass, leveraging the insight that KV pairs receiving minimal attention contribute little to Transformer computations [57]. Formally, given a context of length $n_c$, we construct an input sequence of length $n_{\mathrm{in}} = n_{\mathrm{prompt}} + n_c$ by concatenating the repeat prompt of length $n_{\mathrm{prompt}}$ with the context. Forwarding this input through $f_{\mathrm{LM}}$ with $\mathrm{KV}_c$ generates $d$-dimensional grouped-query features $Q_{l,h} \in \mathbb{R}^{G \times n_{\mathrm{in}} \times d}$ and key features $K_{l,h} \in \mathbb{R}^{(n_c + n_{\mathrm{in}}) \times d}$ for the $h$-th KV head in layer $l$ [3]. Grouped attention between these features produces an attention matrix $A_{l,h} = \mathrm{Softmax}(Q_{l,h} K_{l,h}^{\intercal}) \in \mathbb{R}_{+}^{G \times n_{\mathrm{in}} \times (n_c + n_{\mathrm{in}})}$. Extracting the entries corresponding to keys in $\mathrm{KV}_c$ gives a sliced attention matrix $\bar{A}_{l,h} \in \mathbb{R}_{+}^{G \times n_{\mathrm{in}} \times n_c}$.
Finally, we compute importance scores $S_{l,h} \in \mathbb{R}^{n_c}$ for the $h$-th KV head in layer $l$ by taking the maximum over grouped queries and input positions: $$ S_{l,h} = \max_{g = 1, \ldots, G; \; i = 1, \ldots, n_{\mathrm{in}}} \bar{A}_{l,h}[g, i]. \tag{2} $$ We refer to the aggregated scores $S$ across all KV heads as the maximum cross-attention scores. Figure 13 provides a visualization of these scores.

# 3.3 Observation

The cross-attention pattern from the repeated context onto the prefilled context exhibits significant sparsity, indicating substantial opportunities for compressing $\mathrm{KV}_c$. Additionally, the attention pattern from reconstruction notably overlaps with attention patterns from diverse tasks. Such overlap implies that KV features critical for context reconstruction contribute substantially to downstream tasks, highlighting strong generalization capability. Attention Sparsity in Reconstruction. Cross-attention patterns obtained during context reconstruction exhibit greater sparsity than self-attention patterns computed during the initial prefill of $\mathrm{KV}_c$ (Figure 5). During prefill, the model densely interacts among tokens to encode comprehensive contextual information [39]. In reconstruction, however, the model efficiently leverages (1) high-level representations stored in $\mathrm{KV}_c$ and (2) internal knowledge encoded within model weights, thus reducing unnecessary attention lookups. This cross-attention sparsity effectively identifies and removes redundant KV pairs, outperforming prior methods such as $\mathrm{H_2O}$ [57] that rely on attention scores obtained during prefill (Section 4.2). Figure 5: Histogram comparing max attention scores received by KV pairs in $\mathrm{KV}_c$ during prefill versus reconstruction stages, measured on SQuAD with LLaMA3.1-8B. Figure 6: Attention comparison across tasks.
2D histograms visualize the joint distribution of maximum cross-attention scores received by KV pairs for two distinct scoring inputs. Each input consists of a task query and the generated response (Table 2). Each cell at $(v, w)$ indicates the proportion (log-scale) of KV pairs in $\mathbf{KV}_c$ receiving maximum attention of $v$ for the x-axis task and $w$ for the y-axis task. Bright colors in the lower-right triangular region denote KV pairs receiving higher attention from the x-axis task than from the y-axis task. We compute scores using LLaMA3.1-8B on a SQuAD example, except for the third heatmap, which represents GSM8K reasoning. QA-1 and QA-2 denote distinct QA pairs. Figure 13 visualizes the attention patterns for each task. Figure 7: Chunked scoring for the $i$-th chunk in $\mathbf{KV}_c$. We compute attention scores by multiplying queries with subsampled keys of length $m + n_{\mathrm{in}}$, followed by softmax normalization. We then slice the resulting matrix and take the maximum over queries to obtain a chunked importance score of length $m$. We set the grouped-query size to $G = 1$ for clarity. This procedure repeats per chunk. For chunks with $i \geq 2$, we formulate the repeat prompt as: “Repeat the previous context starting with ⟨last few tokens of preceding chunk⟩:”, consistently using the last 8 tokens across all experiments. Pseudo-code is provided in Appendix A. Attention Overlap Across Tasks. Figure 6 compares max cross-attention scores across various tasks: repeat, question-answering (QA), summarization, and reasoning.
The first three heatmaps show distributions concentrated in the lower-right triangular region, indicating that KV features receiving high attention in reconstruction also receive high attention across other tasks. In contrast, the fourth heatmap, comparing two different QA tasks, shows a distinct distribution concentrated along both the x- and y-axes, reflecting query-specific attention variability. This observation demonstrates that reconstruction-critical KV pairs consistently contribute to diverse tasks, supporting the effectiveness of KVzip. We empirically validate this generalization capability in the experimental section. # 3.4 Technical Challenge and Solution Our method concatenates a repeat prompt with context tokens, processing this input through $f_{\mathrm{LM}}$ to obtain attention matrices. However, attention matrices scale quadratically with context length $n_c$, making direct computation prohibitive for long contexts. While fused attention kernels like FlashAttention reduce memory overhead by computing attention scores block-wise without storing full matrices [13], our method uniquely requires a maximization along the query dimension following Softmax normalization along the key dimension. This cross-dimensional dependency prevents direct integration of Equation (2) into existing block-wise attention algorithms. Chunked Scoring. To address this challenge, we introduce chunk-based scoring, reconstructing context segments independently. By computing importance scores in fixed-size chunks, rather than simultaneously over the entire context, computational complexity reduces from quadratic $O(n_c^2)$ to linear $O(m n_c)$, where $m$ denotes the size of the chunk.
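The claimed reduction is easy to sanity-check by counting attention-score entries (an order-of-growth illustration only, not a FLOP count):

```python
def score_entries(n_c, m=None):
    """Attention-score entries materialized for importance scoring.

    Full-context scoring builds one n_c x n_c map at once; chunked scoring
    builds n_c / m maps of roughly m x m each. Constant factors are dropped.
    """
    if m is None:
        return n_c * n_c                 # whole context at once: O(n_c^2)
    return (n_c // m) * (m * m)          # n_c/m chunks of O(m^2): O(m * n_c)

n_c, m = 128_000, 2_000
print(score_entries(n_c) // score_entries(n_c, m))  # → 64, i.e. n_c / m fewer entries
```

For a 128K-token context with 2K chunks, chunked scoring materializes 64 times fewer score entries than scoring the whole context at once.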
Specifically, we partition the context tokens into fixed-length chunks of size $m$ , concatenate each chunk with the repeat prompt, and process the resulting input of length $n _ { \mathrm { i n } } = n _ { \mathrm { p r o m p t } } + m$ through $f _ { \mathrm { L M } }$ (Figure 7). For each Transformer layer, we subsample keys in $\mathsf { K V } _ { c }$ corresponding to each chunk, obtaining a smaller attention matrix of size $n _ { \mathrm { i n } } \times ( m + n _ { \mathrm { i n } } )$ . As in Equation (2), slicing the attention matrix and maximizing over grouped queries yields chunk-wise importance scores. We repeat the process for each chunk and aggregate the scores to obtain the full importance scores of $\mathbf { K V } _ { c }$ . We set the chunk size to $m = 2 \mathsf { K }$ , constant across context lengths, models, and tasks, as the size has negligible impact on performance (Appendix B.1). Complexity Analysis. Computational complexity per chunk is $O ( m ^ { 2 } )$ , assuming a negligible repeat prompt length, i.e., $n _ { \mathrm { p r o m p t } } \ll m$ , thus $n _ { \mathrm { i n } } \approx m$ . Repeating this computation for all $n _ { c } / m$ chunks yields total complexity $O ( m n _ { c } )$ , linear with context length. Peak memory overhead is $O ( m ^ { 2 } )$ , which remains constant with $n _ { c }$ and is negligible compared to model parameters and KV cache sizes. Additionally, we propose a softmax-free variant in Appendix B.2 utilizing a custom CUDA kernel integrated into FlashAttention, further reducing computational costs at a performance trade-off. Importance scoring introduces additional overhead from computing attention queries and keys for chunked inputs through $f _ { \mathrm { L M } }$ with $\mathsf { K V } _ { c }$ . 
Given $n_{\mathrm{in}} \approx m$, FlashAttention incurs $O(n_c m + m^2/2)$ causal-attention FLOPs per chunk, resulting in a total complexity of $O(n_c^2 + n_c m/2)$ across all $n_c/m$ chunks. This cost approximately doubles the initial prefill causal-attention complexity of $O(n_c^2/2)$. Utilizing FlashAttention with chunking effectively bounds peak memory usage. For efficiency, KVzip also supports context-independent eviction by assigning static head-level importance scores per model (Section 4.2, Figure 11), incurring no compression overhead after deployment. Figure 8: (a) Inference efficiency during decoding: attention latency (ms) and peak memory (GB) across KV cache ratios; (b) compression overhead across repeat chunk sizes (0.5K–8K). Empirical Efficiency Analysis. Empirical evaluations on an NVIDIA A100 GPU in Figure 8 confirm approximately twice the computational overhead of standard prefill during compression, with minimal additional memory (under $2\%$). Importantly, compression occurs once per context or per model. Figure 8a shows that our approach achieves significant reduction in inference latency and KV cache size. Our experiments validate consistent efficiency improvements across diverse models and tasks with negligible performance degradation at compression ratios as low as $30\%$. # 4 Experiment # 4.1 Setup Eviction Structure. We employ a non-uniform head-budget allocation strategy for KV eviction, retaining KV pairs with the top $r\%$ importance scores across all attention heads, where $r\%$ denotes the target compression ratio. KV pairs of the initial system prompt remain intact.
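A minimal sketch of this global top-$r\%$ selection, assuming pair-level scores are stacked into a `scores` array of shape (num_heads, n_c); this is illustrative only, not the released implementation:

```python
import numpy as np

def global_keep_mask(scores, r):
    """Keep the top r fraction of KV pairs by score, pooled across all heads.

    scores: (num_heads, n_c) importance scores. Returns a boolean mask of the
    same shape. Because the threshold is global, heads with many high-scoring
    pairs keep more than an r share of their budget and other heads keep less,
    yielding the non-uniform head-budget allocation.
    """
    flat = scores.ravel()
    k = max(1, int(r * flat.size))
    threshold = np.partition(flat, -k)[-k]   # k-th largest score overall
    return scores >= threshold

rng = np.random.default_rng(0)
mask = global_keep_mask(rng.random((8, 100)), r=0.3)
print(mask.mean())  # ~0.3 of KV pairs survive eviction
```

Pairs where the mask is False are evicted; in practice the surviving keys and values would be gathered into a compacted per-head cache.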
To ensure fairness, we apply the same non-uniform allocation to baseline methods, given its demonstrated superiority over uniform allocation [15]. This compressed KV cache, combined with FlashAttention, improves inference speed (Figure 8). Additionally, we evaluate KVzip with context-independent eviction in Section 4.2 and uniform-budget allocation in Appendix B.3. Evaluation. Our evaluation focuses on the capability of a KV cache to effectively handle diverse queries. Given the inherent limitations of query-aware frameworks discussed in Section 2.2, we adopt the query-agnostic framework from Figure 1c. Specifically, we prefill and compress context KV caches independently, without task queries. Existing eviction methods also support this independent prefilling of context [57, 30], enabling evaluation under the query-agnostic framework. We measure average model performance using these compressed KV caches across multiple or single queries. Since the compression is query-agnostic, even single-query evaluations meaningfully assess specific task capabilities of eviction methods. Unlike prior methods that evict KV pairs from replicated caches for grouped queries [30], we evict directly from the initially stored cache before replication, thus reducing the actual storage required for the KV cache. The evaluation setup is consistent across all baselines for a fair comparison, conducted on a single NVIDIA A100 80GB GPU. Baselines, Datasets, and Models. We benchmark against state-of-the-art KV cache eviction methods, including $\mathrm { H _ { 2 } O }$ [57], SnapKV [30], and PyramidKV [6]. We further compare DuoAttention [50] using head-level eviction for context-independent compression. Evaluations span diverse datasets: SQuAD [44], GSM8K [12], needle-in-a-haystack (NIAH) [24], and nine tasks from SCBench [32]. SCBench provides comprehensive multi-query evaluations, including tasks from RULER [21] and $\infty$ Bench [56]. 
Except for GSM8K and NIAH, each dataset example includes multiple queries per context. Context lengths range from 100 to 170K tokens, tokenized with the Qwen tokenizer [51], covering domains such as long-document QA, retrieval, mathematical reasoning, in-context learning, and code comprehension. Appendix A provides implementation details and dataset specifics. Figure 9: Accuracy versus KV cache ratio for KVzip (ours), $\mathrm{H_2O}$, SnapKV, and PyramidKV on 12 benchmark tasks, grouped into retrieval (NIAH, Retr.KV, Retr.Prefix-Suffix, Code.RepoQA), contextual QA (SQuAD, GSM8K, En.QA, En.MultiChoice), and redundancy (En.Summary, Retr.MultiHop, Math.Find, ICL.ManyShot) rows. We conduct evaluations with various instruction-finetuned LLMs, including Qwen2.5-7B, Qwen2.5-14B, LLaMA3.1-8B, and Gemma3-12B [51, 19, 46]. These models utilize GQA with group sizes varying from 4 (LLaMA3.1-8B) to 7 (Qwen2.5-7B). Gemma3 employs hybrid attention mechanisms, combining global and sliding window strategies [46]. All evaluations use Bfloat16 precision. We use greedy decoding with these models to generate responses. Furthermore, we integrate KVzip with the QServe quantization framework, adopting 8-bit weights, 8-bit activations, and 4-bit KV cache [33]. # 4.2 Benchmarking Task Generalization. Figure 9 presents multi-query evaluation results for Qwen2.5-7B across 12 benchmark datasets, grouped into three categories.
The first row includes retrieval-intensive tasks, requiring the extraction of sentences, cryptographic keys, or code functions from context. Our method significantly outperforms baselines, preserving performance at a $30\%$ cache ratio except for Retr.Prefix-Suffix, while baseline methods degrade notably even at $90\%$ retention. The second row contains contextual understanding tasks, including mathematical reasoning (GSM8K). Our method achieves near-lossless compression down to $20\%$, consistently outperforming baselines. In the last row, En.Summary requires high-level contextual information, whereas the other tasks contain repetitive contextual information [32]. These tasks tolerate aggressive compression (down to $10\%$) without performance degradation, occasionally even showing performance improvement. We hypothesize that this improvement results from reduced attention distractions following KV eviction [54]. Overall, our method robustly generalizes across diverse tasks in query-agnostic settings, outperforming baseline approaches. Model Scale and Architecture. Figure 10 shows performance across larger models (Qwen2.5-14B), distinct model families (LLaMA3.1-8B), and hybrid attention architectures (Gemma3-12B). Gemma employs global and sliding-window attention layers in a 1:5 ratio [46]. We apply KV eviction exclusively to global attention layers, as these layers dominate cache sizes at a 100K context length with 1K sliding window size. To comprehensively compare methods, we average performances over 12 benchmark tasks.
Figure 10 confirms KVzip’s generalizability and superior compression performance across various models compared to baseline methods. KV Quantization. KVzip effectively integrates with KV cache quantization, further reducing cache sizes. Figure 10 evaluates KV eviction methods on a 4-bit KV quantized model (LLaMA3-8B-W8A8KV4) from QServe [33]. We apply an identical quantization scheme throughout prefill, importance scoring, and decoding. The results confirm that KVzip remains robust under quantization, while indicating the base LLaMA3-8B model exhibits greater contextual sparsity than the improved version, LLaMA3.1-8B. Specifically, the 16-bit KV cache occupies 16.3GB at a 124K input length. Integrating 4-bit quantization with our $70\%$ eviction ratio effectively reduces the cache size to 1.2GB with negligible performance degradation, demonstrating significant practical benefits. Context-Independent Eviction. KVzip also supports context-independent eviction strategies, requiring only a one-time importance scoring per model and incurring no compression overhead after deployment [50]. Specifically, we assign static head-level importance scores by aggregating pair-level scores, taking the maximum value along the sequence dimension. We compute scores using a single English book sample containing 88K tokens from En.QA in SCBench [32] and apply DuoAttention’s head-level KV eviction strategy [50]. Figure 14 visualizes the obtained head-score distribution, comparing with scores derived from other data sources. Figure 11 compares KVzip against DuoAttention [50], using publicly released official head scores on LLaMA3-8B-Instruct-Gradient-1048K [18]. Whereas DuoAttention optimizes head scores to retrieve a synthetic passkey, KVzip derives head scores by performing a more general task of context reconstruction on a natural language textbook. Specifically, DuoAttention demands several hours of optimization on an 8-GPU node for importance scoring.
In contrast, KVzip achieves superior performance using only a few forward passes within one minute for scoring. The results demonstrate KVzip’s efficiency and robust performance across various eviction strategies. # 4.3 Analysis Figure 10: Performance on various models averaged over 12 benchmark datasets. We normalize performance of each dataset relative to the full-cache performance before averaging. Appendix C provides detailed results per dataset, including results for LLaMA3.1-3B. Figure 11: Average relative performance across 12 benchmarks with head-level eviction. The lowest KV cache ratio is set to 0.4 due to DuoAttention’s lower limit of 0.32. Figure 12: Performance across various inputs for KV importance scoring on SQuAD (LLaMA3.1-8B). Necessity of Context Reconstruction. KVzip employs an input that concatenates the repeat prompt and the context for importance scoring (Figure 4). Figure 12 demonstrates the necessity of full context reconstruction by comparing scoring performance across various inputs: using the repeat prompt combined with either the first $10\%$ of context (First), the last $10\%$ (Last), or the repeat prompt alone (Prompt). Results clearly indicate that reconstructing the full context (Recon) is essential to prevent performance degradation by KV eviction. Table 1: Behavior analysis. Generation results on a privacy-related example from DecodingTrust [48], using LLaMA3.1-8B with full KV cache and a $40\%$ compressed cache via KVzip. Behavior Analysis Beyond Task Solving. Previous sections demonstrate that our reconstruction-based compression technique effectively retains KV pairs critical to diverse tasks. Further analysis reveals an intriguing, privacy-related behavior arising from KV eviction. Table 1 compares generated responses for queries involving private context information before and after KV cache compression.
Specifically, the LLaMA3.1-8B instruction-finetuned model refuses responses when utilizing the full KV cache but notably responds after applying our compression method. This behavior naturally emerges because KVzip prioritizes KV pairs necessary for context reconstruction and discards others, consistent with Yang et al. [53]. Although practical implications may be limited (since cached contexts typically imply permission for utilization), this observation suggests intersections between KV eviction techniques and shallow-alignment concerns [40], motivating further research exploration. # 5 Related Work KV Cache Compression. Compressing KV caches of Transformer-based models is crucial for efficient inference [47]. Sparse Transformer methods explicitly train models to utilize sparse or localized KV caches, reducing memory requirements during inference [11, 22, 27]. Compressive Transformer approaches further compress caches by merging KV pairs during training [3, 26, 43]. Liu et al. [36] show that Transformer-based LLMs exhibit contextual sparsity during inference, motivating dynamic KV eviction methods such as H2O and FastGen that operate during decoding without additional training [4, 9, 16, 35, 38, 52, 57]. SnapKV and PyramidKV specifically target KV eviction during long-context prefill [6, 15, 30], while DuoAttention profiles and selectively replaces attention heads with sliding-window attention prior to deployment [49, 50]. Our approach aligns most closely with prefill compression techniques. Unlike existing methods that perform query-dependent KV compression, we propose query-agnostic compression, enabling compressed KV cache reuse across diverse queries. Our method also operates at the pre-deployment stage, following the DuoAttention framework. Recent studies have explored KV cache compression via quantization [33, 37]. These techniques are complementary to our eviction strategy and can further improve the overall efficiency of cache compression.
Efficient LLM Inference. Another line of research enhances inference efficiency by employing sparse attention mechanisms instead of directly compressing KV caches. BigBird achieves efficiency by training models with sparse attention structures, reducing inference-time attention costs [55]. MInference leverages attention sparsity at inference without additional training [23]. Approaches including Quest reduce attention computations during decoding by leveraging KV cache offloading and retrieval techniques [10, 29, 34, 45]. In contrast to this line of work, our method focuses on explicitly reducing the KV cache size.
Transformer-based large language models (LLMs) cache context as key-value (KV) pairs during inference. As context length grows, KV cache sizes expand, leading to substantial memory overhead and increased attention latency. This paper introduces KVzip, a query-agnostic KV cache eviction method enabling effective reuse of compressed KV caches across diverse queries. KVzip quantifies the importance of a KV pair using the underlying LLM to reconstruct original contexts from cached KV pairs, subsequently evicting pairs with lower importance. Extensive empirical evaluations demonstrate that KVzip reduces KV cache size by 3-4$\times$ and FlashAttention decoding latency by approximately 2$\times$, with negligible performance loss in question-answering, retrieval, reasoning, and code comprehension tasks. Evaluations include various models such as LLaMA3.1-8B, Qwen2.5-14B, and Gemma3-12B, with context lengths reaching up to 170K tokens. KVzip significantly outperforms existing query-aware KV eviction methods, which suffer from performance degradation even at a 90% cache budget ratio under multi-query scenarios.
[ "cs.DB", "cs.LG" ]
# 1 Introduction Since the advent of large language models (LLMs), there has been ongoing debate about the utility of symbolic representations such as Abstract Meaning Representations (AMRs; Banarescu et al., 2013) in (LLM-based) pipelines and existing NLP tasks. While some studies report limited or negative impact of AMRs on mainstream NLP tasks (Jin et al., 2024), recent work has demonstrated their value in specific applications, such as syntactic simplification (Yao et al., 2024) and semantically controllable text transformation (Li et al., 2025). Perhaps unsurprisingly, incorporating AMR has been particularly well-explored and effective in tasks related to semantics (Wein and Opitz, 2024). Natural language inference (NLI; Dagan et al., 2010) is a popular task in NLP where the solver is given a premise and a hypothesis, and asked to determine whether the hypothesis is true if the premise is true. The label space consists of three labels: entailment if the hypothesis is true, contradiction if the hypothesis is false, and neutral if the truth value of the hypothesis cannot be determined; this can also be condensed into two labels: entailment and non-entailment. As shown in Figure 1, “Athletes introduced the secretaries” should be entailed by “Serious athletes introduced the secretaries.” Therefore, the label should be entailment because the truth of the premise indicates the truth of (or entails) the hypothesis. As a meaning-focused task, NLI aligns well with the motivation behind AMRs, i.e., to abstract sentence meaning beyond surface form, given NLI models’ tendencies to adopt shallow heuristics rather than understanding the relationship between the premise and the hypothesis, leading to poor generalization to novel data (Gururangan et al., 2018; Poliak et al., 2018; McCoy et al., 2019; Serrano et al., 2023).
In this paper, we investigate whether incorporating AMRs as additional input, either during (a) fine-tuning or (b) prompting, can encourage models to attend more to abstract meaning, thereby improving generalization and overall performance. As illustrated in Figure 1, we add AMRs to either the training data or prompts, and then evaluate how the addition of AMR affects generalization performance. We find that AMRs generally hinder performance in both fine-tuning and prompting settings, with the exception of prompting on HANS. However, this improvement appears to stem from AMRs amplifying surface-level differences rather than capturing deeper semantic meaning. # 2 Related Work NLI (Dagan et al., 2010) is a hallmark task demonstrating a model’s ability to understand natural language. Select neural models like BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) trained on datasets like Multi-Genre NLI (MNLI; Williams et al., 2018) and Stanford NLI (SNLI; Bowman et al., 2015) provide test-set performance close to that of humans (Nangia and Bowman, 2019), but the near-human performance on MNLI has been attributed to models optimizing on the spurious correlations between lexical items and labels in the data (Poliak et al., 2018; McCoy et al., 2019; Gururangan et al., 2018; Serrano et al., 2023). The same models that excel in test-set performance suffer from poor generalization to other datasets that represent the same task (Zhou et al., 2020; McCoy et al., 2020; Delbari and Pilehvar, 2025). LLMs and in-context learning have been used to tackle NLI and its generalization challenges, with mixed results; Webson and Pavlick (2022) show that the content of prompts does not significantly influence LLMs’ performance in NLI tasks, while Kavumba et al. (2023) and He et al. (2024) use chain-of-thought reasoning and natural language explanations to improve NLI performance and generalization. However, Zhong et al.
(2023) report that LLMs’ NLI performance is still only comparable to much smaller encoder-only models like BERT and RoBERTa (Devlin et al., 2019; Liu et al., 2019), leaving adversarial NLI an ongoing area of research. Recent work on AMRs has set out to utilize AMR graphs for a variety of downstream tasks, including summarization and information extraction (see Wein and Opitz (2024); Sadeddine et al. (2024) for comprehensive overviews). AMRs excel in capturing structure-dependent meaning (Leung et al., 2022) and have shown particular promise in meaning-sensitive tasks such as debiasing translationese (Wein and Schneider, 2024), style transfer (Hua et al., 2023), and sentence-level manipulation (Li et al., 2025), especially when used in conjunction with fine-tuned models. To the best of our knowledge, Opitz et al. (2023) represents the only prior effort to incorporate AMRs into NLI, and they do so for the purpose of interpretable NLI evaluation. They find that metrics based on AMR are robust unsupervised representations of premise-hypothesis relationships when used alongside neural representations like BERT. # 3 Data & Experiments # 3.1 Data & Models In these experiments, we use two datasets: MNLI (Williams et al., 2018) and HANS (McCoy et al., 2019). MNLI is a crowdsourced dataset, with a test set that is not available to the public. We follow prior work (Wang et al., 2018; Devlin et al., 2019) in taking one of its two development splits as the evaluation dataset. Specifically, we take the matched development set to use as our evaluation dataset. The training dataset includes 297k sentence pairs, while the evaluation set contains around 10k pairs. HANS is a template-based evaluation dataset with 30k examples. Unlike MNLI and other NLI datasets, its label space consists of only two labels: entailment and non-entailment.
We follow prior work (McCoy et al., 2020; Min et al., 2020) in collapsing the model’s neutral and contradiction predictions to the single non-entailment label when calculating evaluation metrics. We use an off-the-shelf AMR parser from amrlib to parse all the sentences from the two datasets we use. The model is BART-large (Lewis et al., 2019) fine-tuned on AMR 3.0 (Knight et al., 2021). While parsers with higher reported scores exist (e.g., Bevilacqua et al., 2021), we follow Uhrig et al. (2021) and Opitz et al. (2023) in selecting an amrlib parser for ease of implementation. # 3.2 Experiment 1: Can fine-tuned models benefit from AMR in NLI? We train three sets of BERT-base models, augmented with AMR information, to perform our experiment. We incorporate AMR in three ways: (1) the linearized AMR is concatenated to the text input (+AMR as text); (2) a graph neural network representation of the AMR is concatenated to the text representation (+AMR as graph); and (3) just the linearized AMR is used as the text input (AMR as text only). Table 1: Performance comparison with and without AMR on HANS and MNLI test sets in the fine-tuning setting. Both datasets measure accuracy. We adopt the setup and hyperparameters of previous work in MNLI fine-tuning and HANS evaluation (McCoy et al., 2020; Min et al., 2020), detailed in Appendix A. We integrate AMR into the models as text via linearization, removing all newlines and whitespace sequences longer than length two. # 3.3 Experiment 2: Can prompt-based models benefit from AMR in NLI? In this experiment, we evaluate whether incorporating AMRs improves LLMs’ performance on NLI, on both the MNLI and HANS datasets, the latter of which remains challenging even after fine-tuning. Jin et al. (2024) find that only instruction-tuned GPT models are capable of reliably processing AMRs. We therefore restrict our evaluation to GPT-4o in zero-shot and 5-shot settings. The template can be found in Appendix B.
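The preprocessing described above (collapsing labels for HANS scoring and linearizing AMRs for the "+AMR as text" input) might look like the following sketch; the `[SEP]` convention, helper names, and the example AMR are illustrative assumptions rather than the exact pipeline:

```python
import re

def linearize(amr: str) -> str:
    """Flatten a PENMAN-style AMR: drop newlines, squeeze long whitespace runs."""
    return re.sub(r"\s{3,}", "  ", amr.replace("\n", " ")).strip()

def nli_input(premise, hypothesis, premise_amr=None, hypothesis_amr=None):
    """Build a '+AMR as text' input: sentence pair with linearized AMRs appended."""
    parts = [premise, hypothesis]
    if premise_amr and hypothesis_amr:
        parts += [linearize(premise_amr), linearize(hypothesis_amr)]
    return " [SEP] ".join(parts)

def collapse(label: str) -> str:
    """Two-way HANS label space: neutral and contradiction become non-entailment."""
    return "entailment" if label == "entailment" else "non-entailment"

# Hypothetical AMR for the running example sentence.
amr = "(i / introduce-01\n      :ARG0 (a / athlete)\n      :ARG1 (s / secretary))"
print(nli_input("Serious athletes introduced the secretaries.",
                "Athletes introduced the secretaries.",
                premise_amr=amr, hypothesis_amr=amr))
print(collapse("contradiction"))  # → non-entailment
```

The AMR-only condition would pass just the two linearized graphs in place of the sentences.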
We test three input conditions: (a) sentence only; (b) AMR only; and (c) sentence + AMR. Label preprocessing follows the same procedure as in the fine-tuning setup for MNLI. In the 5-shot setting, we randomly sampled 5 examples from the training set of each dataset. We set the temperature to 0 to ensure deterministic outputs. # 4 Results & Discussion # 4.1 Experiment 1 We report the accuracies of our fine-tuned models with and without AMRs in Table 1. We report numbers from prior work (McCoy et al., 2020; Min et al., 2020) in addition to our experiments to serve as comparison baselines and to ensure our setup is correct. Our reported numbers are an average across 10 runs with varying seeds. As shown in Table 1, AMR augmentation does not yield improvements in MNLI performance, nor in HANS generalization. Perhaps analogously to previous data-driven attempts at improving generalization (Clark et al., 2019; Min et al., 2020; Yaghoobzadeh et al., 2021), additional AMR information as either text or graph does not affect MNLI performance. An analysis of their confusion matrices reveals AMR adds or subtracts little in terms of the MNLI label decision boundary. On HANS performance, we discuss two main findings. Standalone AMR input for classification intensifies heuristics favoring the entailment label. AMR-only models predict the entailment label for 98.3% of HANS examples, compared to the baseline models at 94.7%. We attribute this to an intensified version of the baseline models’ heuristic correlating overlap between the hypothesis and premise with the entailment label, dubbed the lexical overlap heuristic (McCoy et al., 2019). We note this is concurrent with a still-competitive MNLI performance, at 84%. We discuss this phenomenon in more detail in Appendices C.1 and C.2. Mixing AMRs and text leads to more (false) negative predictions in novel data.
On the other hand, combining AMR information with text strongly affects HANS label decision boundaries in the opposite direction, overriding various shallow heuristics favoring the entailment label observed in McCoy et al. (2020) and in our baseline and AMR-only experiments. Our +AMR as text models predict non-entailment for 86.6% of HANS examples, and our +AMR as graph models for 86.9%, even predicting non-entailment on highly overlapping examples. We attempt to disentangle the effects of AMRs and text in a combined representation in Appendix C.2, where we find that while AMR can be used to perform NLI, it is less effective than text input, and combining the two introduces new artifacts that are more difficult to interpret. # 4.2 Experiment 2 The results for prompting with GPT-4o are shown in Table 2. We report only the zero-shot results in the main text, as the five-shot setting yields similar overall performance and prediction patterns. Results for the five-shot setting are provided in Appendix F. Two main observations emerge. AMRs increase (false) negative predictions. As shown in the table, model performance is consistently lowest when prompted with AMRs alone, while including the original sentence improves results. We find this is because AMRs lead models to make more negative predictions (see Appendix D). To test this statistically, we fit a logistic regression model predicting non-entailment using SMATCH++ (Opitz, 2023) between hypothesis and premise AMRs and data source (gold vs. predicted). A significant negative interaction ($\beta = -0.042$, $p < 2\mathrm{e}{-16}$) shows that SMATCH similarity influences model predictions more than gold labels. Table 2: Performance comparison with and without AMR on HANS and MNLI test sets in the LLM zero-shot prompting setting. Figure 2: Accuracy of three prompt settings across different sentence lengths on MNLI.
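A toy version of this logistic-regression analysis on synthetic data (the data, learning rate, and helper code are made up for illustration; only the sign of the slope mirrors the reported result):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the analysis: a SMATCH-style similarity in [0, 100]
# and a binary "model predicted non-entailment" outcome that becomes less
# likely as AMR overlap grows (mirroring the reported negative coefficient).
smatch = rng.uniform(0, 100, size=2000)
p_neg = 1 / (1 + np.exp(-(2.0 - 0.042 * smatch)))
y = rng.random(2000) < p_neg

# Logistic regression fit by plain gradient descent on [1, smatch] features.
X = np.column_stack([np.ones_like(smatch), smatch])
w = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.0002 * X.T @ (p - y) / len(y)
print(w[1])  # negative slope: higher SMATCH similarity, fewer non-entailment predictions
```

In the paper's actual analysis, the regression additionally includes the data source (gold vs. predicted AMRs) and an interaction term; this sketch keeps only the similarity predictor.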
Further analysis reveals that AMR’s sensitivity to surface-level lexical and syntactic variation leads to low structural overlap between semantically equivalent expressions, misleading the model toward non-entailment. This also explains why, on the HANS test set, prompts that include both the sentence and its AMR lead to the highest rate of negative predictions: the AMR representation amplifies subtle differences between two otherwise similar strings, making semantic mismatches more salient and pushing the model toward rejecting entailment. Such nuanced contrasts are what HANS is designed to probe in language models, prompting GPT-4o to overpredict non-entailment. AMR does not lead to more robust performance with longer sequence length. Opitz et al. (2023) reported that incorporating AMRs improves robustness in NLI prediction. We investigate whether this finding holds for LLMs. Specifically, we plot accuracy across NLI examples binned by total sequence length (premise $+$ hypothesis). For sequences exceeding 100 words, we group them into a single bin due to their sparsity. As shown in Figure 2, when GPT-4o is prompted with both sentence and AMR inputs, accuracy slightly increases for inputs longer than 80 words. However, this performance remains lower than that of sentence-only prompts across most length bins. We find no evidence that AMR-only prompts enhance robustness to longer sequences. # 4.3 Summary Our fine-tuning experiments suggest that AMR-only models are still susceptible to heuristics. We also observe that combining text with AMR as both graph and text is challenging and results in a strong preference towards the non-entailment label, even for highly overlapping, entailing examples. Our LLM experiments showcase a similar preference towards the non-entailment label. This suggests that AMRs effectively highlight subtle distinctions between minimal pairs, explaining improved HANS performance.
However, for simpler examples, this heightened contrast can cause the model to overpredict No, even for entailing sentence pairs.
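The length-binned accuracy analysis from §4.2 can be sketched as below. The bin width of 20 words is an assumption for illustration; the paper only specifies that sequences over 100 words share a single bin.

```python
def binned_accuracy(examples, bin_width=20, cap=100):
    """Accuracy per total-length bin; lengths above `cap` share one bin."""
    bins = {}
    for n_words, correct in examples:
        key = cap if n_words > cap else (n_words // bin_width) * bin_width
        hit, tot = bins.get(key, (0, 0))
        bins[key] = (hit + int(correct), tot + 1)
    return {k: hit / tot for k, (hit, tot) in sorted(bins.items())}

# (total words in premise + hypothesis, was the prediction correct?)
data = [(15, True), (18, False), (45, True), (45, True), (130, False), (250, True)]
print(binned_accuracy(data))  # {0: 0.5, 40: 1.0, 100: 0.5}
```

Each key is the lower edge of a length bin; plotting these per prompt condition reproduces the shape of the Figure 2 analysis.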
Natural Language Inference (NLI) relies heavily on adequately parsing the semantic content of the premise and hypothesis. In this work, we investigate whether adding semantic information in the form of an Abstract Meaning Representation (AMR) helps pretrained language models better generalize in NLI. Our experiments integrating AMR into NLI in both fine-tuning and prompting settings show that the presence of AMR in fine-tuning hinders model generalization, while prompting with AMR leads to slight gains in GPT-4o. However, an ablation study reveals that the improvement comes from amplifying surface-level differences rather than aiding semantic reasoning. This amplification can mislead models to predict non-entailment even when the core meaning is preserved.
[ "cs.CL" ]
# 1 Introduction Preference optimization (PO) methods such as DPO (Rafailov et al., 2024) have shown success in improving LLMs’ performance on various tasks (Dubois et al., 2024). These methods usually involve a contrastive learning objective that encourages LLMs to generate a preferred response $y^{+}$ with higher probability and a dispreferred response $y^{-}$ with lower probability, given a query $x$. Prior research (Tang et al., 2024; Razin et al., 2024) has shown the importance of selecting suitable response pairs for PO training. In particular, the contrastive training signals sent to LLMs are partly derived from the differences between $y^{+}$ and $y^{-}$. These differences shape what LLMs can learn, yet they often do not exactly match the set of desirable differences to learn. This is because, aside from differences that we want models to learn (useful signals; e.g., $y^{+}$ is more helpful than $y^{-}$ in factoid question answering), there can be noisy differences (noisy signals). For instance, $y^{+}$ and $y^{-}$ can differ in features that are irrelevant for a task (e.g., different writing styles for factoid question answering), or the differences can point in an incorrect direction (e.g., $y^{+}$ is less correct than $y^{-}$). Intuitively, if there are more noisy differences, then LLMs may not effectively learn the desired differences (e.g., to be more helpful) (see Figure 1). Although prior research (D’Oosterlinck et al., 2024; Wu et al., 2024) has investigated the correlation between certain proxies of "differences" (e.g., edit distance) and PO learning outcomes, it does not distinguish noisy from desired differences, and therefore cannot accurately model the relationship. Therefore, we develop a metric called Distance Calibrated Reward Margin (DCRM) that aims to measure the density of desired differences among the total differences present.
DCRM is the ratio between the reward margin, which is a proxy for the amount of desired differences, and two distance metrics (edit distance, probability difference), which are proxies for the total amount of differences. To study DCRM, we study three common types of preference datasets, categorized by their (1) response sources and (2) preference labeling scheme. We use Ultrafeedback (Cui et al., 2023) as the seed to construct the datasets, and find that different types of datasets vary in their average DCRM values. We train three base models (LLaMA-2-7B-Chat, LLaMA-3.2-1B-Instruct, Gemma-2B-IT) on these datasets and use AlpacaEval (Dubois et al., 2024), MT-Bench (Zheng et al., 2023), and Arena-Hard (Li et al., 2024) for evaluation. Across all settings, we notice a correlation between higher DCRM and better training outcomes. We further conduct a feature analysis to inspect the properties of each dataset and understand qualitatively what signals (i.e., noisy or desired differences) models learn after training. Inspired by the aforementioned correlation, we propose a method called Best of $N ^ { 2 }$ pairing to select response pairs with high DCRM, and show that training LLMs on the new datasets gives higher performance than on the original datasets. Our contribution is summarized as follows. • We propose a novel metric DCRM that measures the quality of a response pair for PO training. • We compare three common types of preference datasets and show a positive correlation between the average DCRM value of a training dataset and the training effectiveness. • We propose best-of- $N ^ { 2 }$ pairing, which selects response pairs with high DCRM values for effective PO training. # 2 Task Setup # 2.1 Problem Definition Let $\pi ( y | x )$ be a language model (LM) that places a probability distribution over response $y$ conditioned on input $x$ . 
Let $\mathcal{D} = \{x_i, y_i^{+}, y_i^{-}\}$ be a preference dataset where responses $y^{+}$ are preferred to $y^{-}$. Offline preference optimization methods, like Direct Preference Optimization (DPO) (Rafailov et al., 2024), use $\mathcal{D}$ to train model $\pi_{\theta}$ starting from the base model $\pi_{\mathrm{ref}}$, by minimizing the following loss: $$ \mathcal{L}_{\mathrm{DPO}} = -E_{(x, y^{+}, y^{-}) \sim \mathcal{D}}\left[ \log \sigma\!\left( \beta \log \frac{\pi_{\theta}(y^{+}|x)}{\pi_{\mathrm{ref}}(y^{+}|x)} - \beta \log \frac{\pi_{\theta}(y^{-}|x)}{\pi_{\mathrm{ref}}(y^{-}|x)} \right) \right] $$ where $\beta$ is a hyperparameter. In this work, we aim to understand how qualitative and quantitative differences between $y^{+}$ and $y^{-}$ influence the learning behavior of DPO. # 2.2 Preference Datasets To guide our investigation, we group common techniques for preference dataset curation into 3 categories, according to two axes: source distribution of the response $y$, and the preference labeling function (see Figure 2). Figure 2: Three commonly used types of preference datasets. Same Source with RM Preferences (SS-RM) is used in DPO (Rafailov et al., 2024), SimPO (Meng et al., 2024), ODPO (Amini et al., 2024), IPO (Azar et al., 2023), etc.; Different Sources with RM Preferences (DS-RM) is used in CPO (Xu et al., 2024), Ultrafeedback binarized (Tunstall et al., 2023), ORPO (Hong et al., 2024), Intel/orca_dpo_pairs, etc.; Different Sources with Fixed Preferences (DS-Fix) is used in SPIN (Chen et al., 2024), APO (D’Oosterlinck et al., 2024), etc.
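The per-example DPO loss from §2.1 can be sketched numerically with plain floats, standing in for a deep-learning framework; the sequence log-probabilities in the example are invented.

```python
import math

def dpo_loss(logp_pos_theta, logp_pos_ref, logp_neg_theta, logp_neg_ref, beta=0.1):
    """Per-example DPO loss from sequence log-probabilities: -log sigmoid(margin)."""
    margin = beta * (logp_pos_theta - logp_pos_ref) \
           - beta * (logp_neg_theta - logp_neg_ref)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# At initialization pi_theta == pi_ref, so the implicit margin is 0 and the
# loss equals -log(0.5) = log 2 ~ 0.693.
print(round(dpo_loss(-12.0, -12.0, -15.0, -15.0), 4))  # 0.6931
```

Raising the policy's log-probability on $y^{+}$ (or lowering it on $y^{-}$) increases the margin and strictly decreases the loss, which is exactly the contrastive pressure the surrounding text describes.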
Same Source w/ RM Preference (SS-RM) The original DPO work (Rafailov et al., 2024) proposed to sample $y ^ { + }$ and $y ^ { - }$ from the same model, $\pi _ { \mathrm { r e f } }$ $( \mathrm { { S S } } _ { \pi _ { \mathrm { r e f } } } )$ , and derive the preference labels using a reward model. This has been widely adopted in follow-up works (Meng et al., 2024; Amini et al., 2024; Azar et al., 2023; Lai et al., 2024). Note that $y ^ { + }$ and $y ^ { - }$ can also be from the same source that is not $\pi _ { \mathrm { r e f } }$ $( \mathrm { S S } _ { \pi _ { \mathrm { o t h e r } } } )$ , meaning that these datasets can be re-used to train a different base LLM too. Diff Source w/ RM Preference (DS-RM) Earlier work in DPO used output pairs sampled from two different humans (Köpf et al., 2023) or models (Ultrafeedback binarized (Cui et al., 2023; Tunstall et al., 2023); Argilla-OpenOrca3) to construct the dataset (i.e., $y ^ { + }$ is from a different source than $y ^ { - } .$ ). The preference labels were typically assigned using a reward model or LLM-based judges. This dataset construction is agnostic to the choice of the policy $\pi _ { \mathrm { r e f } }$ . Once created, these datasets can again be re-used without additional sampling or preference labeling overhead for any new choice of $\pi _ { \mathrm { r e f } }$ (Wu et al., 2024; Hong et al., 2024; Bai et al., 2022). Diff Source w/ Fixed Preference (DS-Fix) It is possible to have a prior estimate of the relative strengths of two sampling sources (e.g. using rankings on benchmarks like Chatbot-Arena (Chiang et al., 2024)). In such scenarios, instance-level preference between 2 responses from different sources can be assigned based on model-level rankings (i.e., $y ^ { + }$ is always from a "stronger" model than $y ^ { - } \mathrm { . }$ ). 
Methods such as SPIN (Chen et al., 2024) have successfully used such strategies (setting $y^{-} \sim \pi_{\mathrm{ref}}$), while others (D’Oosterlinck et al., 2024) report suboptimal performance with these datasets. # 2.3 Measuring density of desired differences Our goal is to study how corpus-level differences in preference pairs impact models’ learned behavior after DPO. We quantify the difference between $y^{+}$ and $y^{-}$ using a combination of three metrics, which we explain and motivate below: Token-level edit distance $(e_{\Delta})$ between $y^{+}$ and $y^{-}$ is the first distance metric that we use. It is the token-level Levenshtein distance between 2 outputs. $e_{\Delta}$ is easily computable and $\pi_{\mathrm{ref}}$-agnostic. It captures differences in length, lexicon, syntax, etc. $\pi_{\mathrm{ref}}$'s LogProb Difference $(p_{\Delta})$ is the second distance metric that we use. It is computed as $\vert \log \pi_{\mathrm{ref}}(y^{+}|x) - \log \pi_{\mathrm{ref}}(y^{-}|x) \vert$. $p_{\Delta}$ measures the difference in probability mass placed on $y^{+}$ and $y^{-}$ by $\pi_{\mathrm{ref}}$. It captures a different notion of “distance” from edit distance; two samples can be very different lexically but be assigned similar probability by $\pi_{\mathrm{ref}}$, or vice versa. Such instances are tougher for the implicit reward model in DPO to distinguish, and this measure helps us account for them. Reward Margin $(r_{\Delta})$ measures the difference in rewards from a reward model RM. It is computed as $r_{\Delta} = r_{y^{+}} - r_{y^{-}}$, where $r_{y}$ is the reward score RM assigns to an output $y$. This reward margin quantifies the desired differences in targeted (relevant) features between the two outputs, irrespective of their lexical and probability differences.
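The two distance metrics are mechanical to compute; below is a sketch with a standard dynamic-programming implementation of the token-level Levenshtein distance for $e_{\Delta}$ and the absolute log-probability gap for $p_{\Delta}$. The example sentences and log-probabilities are invented.

```python
def token_edit_distance(a, b):
    """Token-level Levenshtein distance (e_delta) via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        curr = [i]
        for j, tb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ta != tb))) # substitution (0 if equal)
        prev = curr
    return prev[-1]

def logprob_difference(logp_pos, logp_neg):
    """p_delta = |log pi_ref(y+|x) - log pi_ref(y-|x)|."""
    return abs(logp_pos - logp_neg)

yp = "the capital of France is Paris".split()
yn = "the capital of France is definitely Lyon".split()
print(token_edit_distance(yp, yn))  # 2: one insertion plus one substitution
```

$r_{\Delta}$ has no such closed form: it requires a scored pass of both responses through the reward model.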
We combine these to construct a single metric that measures the density of “desired” differences between two outputs. We call this distance-calibrated reward margin (DCRM): $$ \mathrm{DCRM}(y^{+}, y^{-}) = \frac{\sigma(r_{\Delta}) - 0.5}{e_{\Delta} + p_{\Delta} + \epsilon} $$ We omit $(y^{+}, y^{-})$ as the arguments for $r_{\Delta}, e_{\Delta}, p_{\Delta}$ for brevity and include the constant $\epsilon = 1$ for numeric stability. The numerator captures the normalized reward margin between $y^{+}$ and $y^{-}$ (a 0-centered Bradley-Terry model (Bradley and Terry, 1952)), and the denominator measures their distances (i.e., lexical and probabilistic differences). We hypothesize that when the useful contrast signals (desired differences, measured by $r_{\Delta}$) are a large fraction of the total differences (measured by $e_{\Delta} + p_{\Delta}$) in the response pair (i.e., useful signals are dense), training becomes more effective. DCRM captures this hypothesis. A high DCRM implies (1) a high reward margin between $y^{+}$ and $y^{-}$ (i.e., there are many desired differences between the two for $\pi_{\mathrm{ref}}$ to learn from) and (2) low distances between the two (i.e., the total differences are small). In this case, training signals are more meaningful and less noisy for the LLMs to learn effectively. # 3 Experiment Setup # 3.1 Training Setup Models We experiment with three options for our base model $(\pi_{\mathrm{ref}})$. They include LLaMA2 (LLaMA-2-7B-Chat; Touvron et al. (2023b)), LLaMA3.2 (LLaMA-3.2-1B-Instruct; Grattafiori et al. (2024)), and an extra model from another series, Gemma (Gemma-2B-IT; Mesnard et al. (2024)). We train each of these models using the DPO objective for 2 epochs, and select the best checkpoint based on validation performance. Please refer to Appendix B for other training details.
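The DCRM metric defined in §2.3 is a one-liner to implement; the sketch below follows the formula directly (with $\epsilon = 1$ as in the paper) and illustrates how it rewards pairs whose reward margin comes with small distances. The input values are toy numbers.

```python
import math

def dcrm(r_delta, e_delta, p_delta, eps=1.0):
    """Distance-Calibrated Reward Margin: (sigma(r_delta) - 0.5) / (e_delta + p_delta + eps)."""
    sigma = 1.0 / (1.0 + math.exp(-r_delta))
    return (sigma - 0.5) / (e_delta + p_delta + eps)

# The same reward margin with smaller distances yields a higher DCRM:
print(dcrm(2.0, e_delta=5, p_delta=1.0) > dcrm(2.0, e_delta=50, p_delta=10.0))  # True
print(dcrm(0.0, e_delta=5, p_delta=1.0))  # 0.0: no reward margin, no useful signal
```

Note that a pair whose preference label contradicts the reward model ($r_{\Delta} < 0$) gets a negative DCRM, so maximizing DCRM also penalizes incorrectly directed differences.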
Due to length constraints, we report results for LLaMA2 and LLaMA3.2 in the main paper, and put the results for Gemma in Appendix E. Table 1: Statistics of the datasets. Each metric value is averaged across examples. Changing $\pi_{\mathrm{ref}}$ changes $p_{\Delta}$, so we report separate statistics for LLaMA2 and LLaMA3.2. The reported DCRM values are scaled by 1k for visualization, which does not affect correlation analysis. SS-RM datasets have the highest DCRM while DS-Fix ones have the lowest DCRM. We use the overall scores from the reward model ArmoRM (Wang et al., 2024a) to compute $r_{\Delta}$. Preference Datasets We use the 60K prompts from Ultrafeedback (Cui et al., 2023). We create our preference datasets using responses sampled from four different models across the three settings (SS-RM, DS-RM, DS-Fix) described in $\S 2.2$. For SS-RM, we sample responses from the base model $\pi_{\mathrm{ref}}$. We also use Gemma-2-9B-IT (Gma2) and Mistral-7B-Instruct-v0.2 (Mst) as two extra sources of responses. For each source, we follow Meng et al. (2024) and sample $N = 5$ responses and then select the best response pair with the highest $r_{\Delta}$ using the reward model RM. For DS-RM, we fix the source distributions to Gemma-2-9B-IT (Gma2) and Mistral-7B-Instruct-v0.2 (Mst). We sample one response from each, and decide the preference label using RM. We find that roughly $70\%$ of $y^{+}$ comes from Gma2 and $70\%$ of $y^{-}$ comes from Mst. For DS-Fix, we use the same response pairs as DS-RM, but always set $y^{+}$ to be from Gemma-2-9B-IT (the stronger model) and $y^{-}$ to be from Mistral-7B-Instruct-v0.2 (the weaker model). Dataset Statistics Table 1 shows the dataset statistics. As expected, SS-RM datasets, which get the paired responses from the same source, have the lowest $e_{\Delta}$ and $p_{\Delta}$, leading to the highest overall DCRM.
DS-RM has higher distances and consequently lower DCRM. Surprisingly, we find that DS-Fix has the lowest reward margin even though its samples have a higher lexical difference. This gives it the lowest DCRM across the three settings. Table 2: Main Results; AP-L: Length-Controlled Win Rate on AlpacaEval; AP-R: Raw Win Rate on AlpacaEval; MT: MT-Bench Score; AH: Arena-Hard Win Rate; SS-RM datasets generally lead to the best performance while DS-Fix ones lead to the worst performance. # 3.2 Quantitative Evaluation We evaluate the general conversational and instruction-following abilities of our trained models $\pi_{\theta}$ using three chat benchmarks: AlpacaEval, MT-Bench, and Arena-Hard. AlpacaEval reports the models’ win rates against a baseline model, GPT-4-1106-Preview (Achiam et al., 2024). Arena-Hard runs similar evaluations, with GPT-4-0314 as the baseline model. MT-Bench is a multi-turn conversational benchmark and uses a judge model to score the model’s responses on a scale of 10. # 4 Comparing Different Types of Preference Datasets In this section, we compare models that are trained on different types of preference datasets, and establish a correlation between the dataset-level DCRM value and downstream performance. We report the results in Table 2. # Sampling from the same source distribution (SS-RM) outperforms other methods. Table 2 shows that sampling response pairs from the same distribution ($\pi_{\mathrm{ref}}$ and others) and deriving preferences using the reward model performs better than DS-RM and DS-Fix. In particular, training with responses from $\pi_{\mathrm{ref}}$ gives the best performance, which mirrors findings from prior work (Tang et al., 2024). Figure 3: DCRM is positively correlated with models’ performance boost on AP-L. PCC: Pearson Correlation Coefficient; Y axis: change in AP-L after training. Each point in the diagram corresponds to a trained model.
Relating back to Table 1, SS-RM datasets also have the highest DCRM value. To our surprise, SS-RM Gma2 is on par with SS-RM $\pi_{\mathrm{ref}}$ when $\pi_{\mathrm{ref}} =$ LLaMA3.2. Consulting Table 1, we see that SS-RM Gma2 has a lower $p_{\Delta}$ than that of LLaMA3.2, possibly explaining this result. DS-Fix performs worse than the base model. This technique performs the worst among the three dataset settings. Similar results have also been reported by D’Oosterlinck et al. (2024). In fact, we find that its performance is worse than even the starting model. In Appendix A, we show that there are consistent stylistic differences between the two source distributions (e.g., the presence of more emojis in $Y^{+}$ than $Y^{-}$), which is reflected in the model’s output after training. Again, relating back, DS-Fix datasets also have the lowest DCRM value. DCRM is positively correlated with model performance after training. With the above observations, we formally quantify the correlation between DCRM and downstream performance. To include sufficient data points, we sample multiple outputs from the source distributions and select response pairs that vary the dataset-level $p_{\Delta}$, $e_{\Delta}$, and $r_{\Delta}$. We compute the performance boost, i.e., the AP-L improvement of $\pi_{\theta}$ over $\pi_{\mathrm{ref}}$, and show its correlation with DCRM in Figure 3. We find that DCRM and downstream performance are moderately positively correlated, with a Pearson Correlation of 0.59, which is stronger than the individual metrics – correlation with $e_{\Delta}$, $p_{\Delta}$, and $r_{\Delta}$ is -0.51, -0.55, and 0.43 respectively (see Appendix F.1). We observe a saturation effect once DCRM passes 0.075, and suspect this to be caused by the inherent limitations of the reward model.
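The correlation statistic used here is the standard Pearson coefficient; a self-contained sketch with toy numbers (not the paper's measurements):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

# Perfectly linear toy data gives PCC = 1.0; the paper reports 0.59 for
# dataset-level DCRM vs. AP-L improvement.
print(round(pearson([1, 2, 3, 4], [2, 4, 6, 8]), 4))  # 1.0
```

In the paper's analysis each (x, y) point is one trained model: x is the training set's average DCRM and y is its AP-L change over $\pi_{\mathrm{ref}}$.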
# 5 Operationalizing DCRM In $\S 4$, we observe that higher DCRM is correlated with better training outcomes. Can we use this correlation to guide training dataset selection? Approach One answer is to sample responses from $\pi_{\mathrm{ref}}$. However, this can be expensive with a large model or dataset. Instead, we investigate how to select the best response pair from an existing pool of responses. Formally, given $N$ responses $\{y_1, \cdots, y_N\}$ (and also $\{y_{N+1}, \cdots, y_{2N}\}$ from a second model in the DS setting), we propose to select the pair $(y_i, y_j)$ with the highest DCRM. We denote this as Best of $N^2$ pairing ($\mathrm{Bo}N^2$), since we select the best pair from $N \times N$ candidates. Our method differs from the conventional method (used in SS-RM), which chooses the pair with the highest reward margin by setting $y^{+}$ and $y^{-}$ to the responses with the highest and lowest reward scores. Setup We apply our method to three baselines. In the Same Source (SS-RM) setting, we reselect the response pair using the existing $N$ responses sampled from (1) $\pi_{\mathrm{ref}}$, or (2) Mst. In the Different Sources (DS-RM) setting, we use (3) Gma2-Mst as the third baseline, and select a response pair with the highest DCRM while maintaining the condition that $y^{+}$ and $y^{-}$ come from different sources. Table 3 gives a comparison between the original and reselected datasets. After reselection with DCRM, both $e_{\Delta}$ and $p_{\Delta}$ decrease, while $r_{\Delta}$ stays in a reasonable range without too large a drop. # 5.1 Main Results We compare $\mathrm{Bo}N^2$ against the baselines in Table 4. Best of $N^2$ pairing increases performance across all settings. When training LLaMA3.2, we observe a higher performance across all baselines.
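The pairing procedure described in the Approach paragraph can be sketched as a loop over the $N \times N$ candidate pairs. The explicit constraint that $y^{+}$ must keep the higher reward score is my assumption for keeping the label consistent with the reward model (maximizing DCRM already disfavors negative margins); the toy responses, rewards, and DCRM scores are invented.

```python
def best_of_n2(responses, rewards, dcrm_fn):
    """Pick (y+, y-) maximizing DCRM over all N x N ordered candidate pairs.

    dcrm_fn(i, j) scores the pair with responses[i] as y+ and responses[j]
    as y-. Pairs where rewards[i] <= rewards[j] are skipped (assumption:
    the preference label must agree with the reward model).
    """
    best, best_pair = float("-inf"), None
    n = len(responses)
    for i in range(n):
        for j in range(n):
            if i == j or rewards[i] <= rewards[j]:
                continue
            score = dcrm_fn(i, j)
            if score > best:
                best, best_pair = score, (responses[i], responses[j])
    return best_pair

responses = ["a", "b", "c"]
rewards = [0.9, 0.5, 0.1]
# Toy DCRM scores; in practice computed from r_delta, e_delta, p_delta.
scores = {(0, 1): 0.3, (0, 2): 0.1, (1, 2): 0.4}
print(best_of_n2(responses, rewards, lambda i, j: scores.get((i, j), float("-inf"))))
# ('b', 'c')
```

Note how the selected pair is not the (highest, lowest)-reward pair ("a", "c") that conventional reward-margin selection would pick: the closer pair ("b", "c") has a higher DCRM.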
When training LLaMA2, performance increases notably on top of both Mst (SS-RM) and Gma2- Mst (DS-RM), especially for the latter. Table 3: Statistics of the original and new datasets; w/ $B o N ^ { 2 }$ indicates datasets whose response pairs are reselected using best-of- $N ^ { 2 }$ method. They have a higher DCRM value than their original counterparts. Table 4: Main Results; $B o N ^ { 2 }$ datasets give a stronger performance than their original counterparts. However, performance only increases marginally in the LLaMA2 $\pi _ { \mathrm { r e f } }$ (SS-RM) setting. We suspect that most responses from LLaMA2 are similar to each other. In this case, maximizing the reward margin will not incur very high distances, so the response pairs from $\pi _ { \mathrm { r e f } }$ (SS-RM) are already close to the best. There is little room for improvement no matter how we reselect the pairs. This is evident in Table 3, where we observe a smaller reduction in $e _ { \Delta }$ and $p _ { \Delta }$ compared with every other setting. # 5.2 Ablation Study Since DCRM is composed of three metrics, we do an ablation study of our method in the $\pi _ { \mathrm { r e f } }$ (SS-RM) Table 5: Ablation Study on DCRM in the SS-RM setting; Removing $p _ { \Delta }$ or $e _ { \Delta }$ hurts performance slightly, while removing $r _ { \Delta }$ significantly reduces performance. setting. We remove one of $p _ { \Delta } , e _ { \Delta }$ , or $r _ { \Delta }$ from DCRM and reselect the response pair. Table 5 shows that removing $p _ { \Delta }$ gives a performance close to that of the complete metric, while removing $e _ { \Delta }$ slightly hurts performance. In Appendix I, we show that removing either of these in the Mst (SS-RM) and DS-RM settings can still give a performance boost over the original datasets, which means in these settings our method can be effective with a cheaper computation. Removing $r _ { \Delta }$ makes training much less effective. 
This is expected, since without $r_{\Delta}$ our method selects response pairs that have the smallest distances and are minimally different. This eliminates not only noisy differences, but also the useful ones. # 6 Qualitative Analysis (Feature-Analysis) $\S 4$ and $\S 5$ show the correlation between the DCRM value of a training set and quantitative performance. We also want to inspect whether these datasets have qualitative differences, to validate our starting motivation that connects performance with data quality (i.e., more desired differences and fewer noisy ones between $y^{+}$ and $y^{-}$ make PO more effective), and to better ground DCRM in this quality. We analyze the feature differences between $y^{+}$ and $y^{-}$. We define relevant features (correctness, helpfulness, etc.) as those that the LLMs should learn, and irrelevant features (writing style, sarcasm, tone, etc.) as those not targeted by the task. Features To align with the reward signals, we use the 11 features (de-duplicated) from the ArmoRM reward model as the relevant features. These include helpfulness, truthfulness, etc. We manually define 21 irrelevant features that are roughly orthogonal to these relevant features (see the full lists in Appendix C.1). The useful training signals come from differences between $y^{+}$ and $y^{-}$ that are along relevant features and point in the correct direction (i.e., $y^{+}$ is better than $y^{-}$ for a relevant feature), which we call desired feature differences. Metrics We define $f_{\Delta}$ as the number of features along which $y^{+}$ and $y^{-}$ differ. To measure the fraction of desired feature differences, we define $f_{\Delta}^{\mathrm{des}}$ as the fraction of features in $f_{\Delta}$ that are (a) relevant and (b) contrasted in the correct direction (i.e., $y^{+}$ is “better” than $y^{-}$ for that feature).
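These feature-difference fractions are mechanical to compute once the judge's output is parsed into (feature, direction) pairs: the fraction of differences along relevant features, and the fraction that are also in the correct direction. The sketch below assumes a small illustrative subset of the relevant-feature list (the paper uses the 11 ArmoRM features) and an invented judge output.

```python
RELEVANT = {"helpfulness", "truthfulness", "correctness"}  # illustrative subset

def feature_fractions(diffs):
    """diffs: list of (feature_name, y_plus_is_better) pairs from the judge.

    Returns (fraction along relevant features,
             fraction that are relevant AND in the correct direction).
    """
    rel = [d for d in diffs if d[0] in RELEVANT]
    des = [d for d in rel if d[1]]
    return len(rel) / len(diffs), len(des) / len(diffs)

diffs = [("helpfulness", True), ("tone", True), ("truthfulness", False)]
print(feature_fractions(diffs))  # (2/3 relevant, 1/3 relevant and correctly directed)
```

Averaging these per-pair fractions over sampled pairs gives the dataset-level scores reported in the tables.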
The fraction of features that only satisfy condition (a) is denoted by $f_{\Delta}^{\mathrm{rel}}$. Similar to DCRM, $f_{\Delta}^{\mathrm{des}}$ indicates the ratio of useful contrast signals among noisy signals. To compute these, we prompt GPT-4o-mini-0718 to (1) identify the three most prominent features that differ between the two responses (setting $f_{\Delta} = 3$) and (2) indicate a contrast direction for each feature if applicable (i.e., whether $y^{+}$ is better). Referring to the list of relevant features, we can then compute $f_{\Delta}^{\mathrm{rel}}$ and $f_{\Delta}^{\mathrm{des}}$. Note that we can use this to study the training dataset (i.e., $Y^{+}-Y^{-}$) and the learned differences after training ($Y_{\mathrm{trained}}-Y_{\mathrm{ref}}$). Analysis of training datasets ($Y^{+}-Y^{-}$) To study the feature differences LLMs see during training, we compute the average $f_{\Delta}^{\mathrm{rel}}$ and $f_{\Delta}^{\mathrm{des}}$ across 200 randomly sampled $(y^{+}, y^{-})$ from the training dataset. Higher $f_{\Delta}^{\mathrm{des}}$ implies higher dataset quality. Analysis of learning outcomes ($Y_{\mathrm{trained}}-Y_{\mathrm{ref}}$) To study what LLMs actually learn after training, we compute $f_{\Delta}^{\mathrm{rel}}$ and $f_{\Delta}^{\mathrm{des}}$ for 200 randomly sampled $(y_{\mathrm{trained}} \sim \pi_{\theta}(x), y_{\mathrm{ref}} \sim \pi_{\mathrm{ref}}(x))$ pairs, where $x$ is a test prompt in the AlpacaEval dataset. Higher $f_{\Delta}^{\mathrm{des}}$ implies that the model learns more useful signals (e.g., to be more helpful) and fewer noisy ones (e.g., to be more sarcastic). Following $\S 4$ and $\S 5$, we compare different preference datasets in $\S 6.1$, and then show how $\mathrm{Bo}N^2$ can improve response pair quality in $\S 6.2$. # 6.1 Comparing Common Preference Datasets We present the results in Table 6 to understand (1) what the model sees during training and (2) what it actually learns. Table 6: $f_{\Delta}^{\mathrm{des}}$: Percentage of desired feature differences among the identified feature differences; $f_{\Delta}^{\mathrm{rel}}$: Percentage of relevant feature differences; $Y^{+}-Y^{-}$: differences identified between $y^{+}$ and $y^{-}$ in the training set; $Y_{\mathrm{trained}}-Y_{\mathrm{ref}}$: differences identified between the model’s output on AlpacaEval after training ($Y_{\mathrm{trained}}$) and before training ($Y_{\mathrm{ref}}$). SS-RM datasets typically have the highest $f_{\Delta}^{\mathrm{des}}$, followed by DS-RM and then DS-Fix. DS-Fix datasets have the lowest proportion of desired feature differences in their training data. Analyzing the training set $Y^{+}-Y^{-}$, we see that response pairs from $\pi_{\mathrm{ref}}$ (SS-RM) have the highest percentage of desired feature differences, indicating the highest quality. On the other hand, DS-Fix has the lowest percentage. These results are consistent with our observations in Table 2. Surprisingly, DS-RM has a higher $f_{\Delta}^{\mathrm{des}}$ than Gma2 (SS-RM) and Mst (SS-RM). A possible explanation would be their actual marginal differences in dataset quality, since at least one side of the response sources overlaps. Desired feature differences learned by the model are proportional to their presence in the training set. Our initial observation is that higher $f_{\Delta}^{\mathrm{des}}$ in the training dataset (i.e.
$Y^{+}-Y^{-}$) generally induces higher $f_{\Delta}^{\mathrm{des}}$ in $Y_{\mathrm{trained}}-Y_{\mathrm{ref}}$. This indicates a consistency between the training set and the learned outcome for desired feature differences. To analyze this trend in a fine-grained manner and for more general feature differences, we conduct the following case study in the LLaMA2 $\pi_{\mathrm{ref}}$ (SS-RM) setting. In general, feature differences learned by the model are proportional to their presence in the training set. We inspect the distribution of feature differences per category (i.e., the percentage of each kind of feature difference among all the identified feature differences). Figure 4 shows that for both relevant and irrelevant features, the distributions for $Y^{+}-Y^{-}$ and $Y_{\mathrm{trained}}-Y_{\mathrm{ref}}$ are similar, with a KL divergence of 0.2109 and 0.1284 respectively, so more prominent feature differences in the training set are picked up by the model more after training. # 6.2 Effect of Applying Best-of-$N^2$ Pairing Best of $N^2$ pairing produces datasets with a higher proportion of desired feature differences. We conduct the same feature-based analysis as in $\S 6.1$. Table 7 indicates that in most settings, the datasets produced by our method have a higher percentage of desired feature differences (see $f_{\Delta}^{\mathrm{des}}$ in $Y^{+}-Y^{-}$), which guides the models to learn effectively and do better in relevant features after training (see $f_{\Delta}^{\mathrm{des}}$ in $Y_{\mathrm{trained}}-Y_{\mathrm{ref}}$). In the LLaMA2 $\pi_{\mathrm{ref}}$ (SS-RM) setting, $f_{\Delta}^{\mathrm{des}}$ in $Y^{+}-Y^{-}$ remains approximately the same after applying our method, which can be explained by what we discuss in $\S 5.1$.
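The KL divergence used to compare the two feature-difference distributions in Figure 4 can be sketched as below; the three-bin distributions are invented stand-ins for the per-category percentages.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) in nats for two discrete distributions over the same bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)

train_dist = [0.5, 0.3, 0.2]     # e.g., share of each feature difference in Y+ - Y-
output_dist = [0.45, 0.35, 0.2]  # share in Y_trained - Y_ref
print(kl_divergence(train_dist, train_dist))  # 0.0 for identical distributions
print(kl_divergence(train_dist, output_dist) < 0.05)  # True: small KL, similar shapes
```

A small KL value, like the 0.21 and 0.13 reported above, indicates that the model's post-training output shifts mirror the feature-difference mix it was trained on.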
Figure 4: Distributions of relevant (top) and irrelevant (bottom) feature differences. Each pair of adjacent blue and orange bars represents the percentage of a kind of feature difference ($y^{+}$ more helpful, $y^{-}$ less truthful, etc.) among the identified feature differences. Blue: training set differences ($Y^{+}-Y^{-}$); Orange: differences in model outputs on AlpacaEval after versus before training ($Y_{\mathrm{trained}}-Y_{\mathrm{ref}}$). $Y^{+}-Y^{-}$ and $Y_{\mathrm{trained}}-Y_{\mathrm{ref}}$ have similar distributions. Table 7: Results for feature-based analysis. $\mathrm{Bo}N^2$ datasets have a higher $f_{\Delta}^{\mathrm{des}}$ in most settings. # 7 Related Work Preference Optimization Preference Optimization is an alternative to traditional RLHF methods (Ouyang et al., 2022) such as PPO (Schulman et al., 2017). It avoids the need for an explicit reward model. Popular PO algorithms include DPO (Rafailov et al., 2024), IPO (Azar et al., 2023), KTO (Ethayarajh et al., 2024), R-DPO (Park et al., 2024), SimPO (Meng et al., 2024), CPO (Xu et al., 2024), ORPO (Hong et al., 2024), and so on. Many papers report performance increases on AlpacaEval when training LLMs using PO methods on chat datasets (Ding et al., 2023; Cui et al., 2023). Response Pairs The choice of response pairs in PO affects training outcomes. Tajwar et al. (2024) and Tang et al. (2024) investigate response sources and illustrate the benefits of sampling responses on policy. Another line of work focuses on the differences between $y^{+}$ and $y^{-}$. Prior work (Fisch et al., 2024; Amini et al., 2024; Furuta et al., 2024) suggests that LLMs should learn a different reward margin for each example, since different response pairs can vary in their contrastiveness (i.e., $y^{+}$ is much or only a little better than $y^{-}$).
In reality, however, $y^{+}$ and $y^{-}$ often differ in features irrelevant to the task, and a larger gap between them is not always desirable. Some work focuses on eliminating specific irrelevant differences such as length (Singhal et al., 2023). Others take a more general perspective. Wu et al. (2024) use reward margins to measure differences and dynamically scale the training signal for each example. D'Oosterlinck et al. (2024) and Guo et al. (2024) construct minimally different pairs by revising $y^{-}$ with a stronger LLM to obtain $y^{+}$. However, these methods either do not accurately model the relationship between response pair differences and quality, or require a stronger LLM to be present.
Recent research has attempted to associate preference optimization (PO) performance with the underlying preference datasets. In this work, our observation is that the differences between the preferred response $y^+$ and dispreferred response $y^-$ influence what LLMs can learn, which may not match the desirable differences to learn. Therefore, we use distance and reward margin to quantify these differences, and combine them to get Distance Calibrated Reward Margin (DCRM), a metric that measures the quality of a response pair for PO. Intuitively, DCRM encourages minimal noisy differences and maximal desired differences. With this, we study 3 types of commonly used preference datasets, classified along two axes: the source of the responses and the preference labeling function. We establish a general correlation between higher DCRM of the training set and better learning outcome. Inspired by this, we propose a best-of-$N^2$ pairing method that selects response pairs with the highest DCRM. Empirically, in various settings, our method produces training datasets that can further improve models' performance on AlpacaEval, MT-Bench, and Arena-Hard over the existing training sets.
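The best-of-$N^2$ pairing described above amounts to scoring every ordered pair among $N$ sampled responses and keeping the pair with the highest DCRM. The `dcrm` form below (reward margin discounted by distance) is a simplified stand-in capturing only the stated intuition, not the paper's exact definition, and the reward values and distance function in the example are hypothetical:

```python
import itertools

def dcrm(reward_w, reward_l, distance, alpha=1.0):
    # Simplified stand-in: a large reward margin is good, while a large
    # distance (a proxy for noisy, irrelevant differences) is penalized.
    return (reward_w - reward_l) / (1.0 + alpha * distance)

def best_of_n2_pair(responses, rewards, dist_fn):
    # Enumerate all N^2 ordered (preferred, dispreferred) candidates and
    # return the pair maximizing DCRM together with its score.
    best_pair, best_score = None, float("-inf")
    for i, j in itertools.permutations(range(len(responses)), 2):
        s = dcrm(rewards[i], rewards[j], dist_fn(responses[i], responses[j]))
        if s > best_score:
            best_pair, best_score = (responses[i], responses[j]), s
    return best_pair, best_score
```

With a toy length-difference distance, a pair with a moderate margin but small surface difference can win over a pair with a larger margin achieved through a very different response.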
[ "cs.CL" ]
# I. INTRODUCTION

Camera and LiDAR are two of the most popular sensors applied in autonomous driving. The camera captures colorful images with dense semantic context, while the LiDAR measures distances of sparse points with intensity, reflecting the rough outline of the ambient scene. Their data fusion compensates for the limitations of stand-alone sensors and has been involved in a large variety of downstream intelligent transportation tasks, such as 3D object detection [1], [2], simultaneous localization and mapping (SLAM) [3], [4] and scene flow estimation [5], [6]. Camera-LiDAR calibration is the prerequisite for the aforementioned tasks, since it establishes the spatial relationship between the two sensors. The evolution of deep learning techniques has significantly advanced the development of learning-based calibration methods [7]–[12]. These methods either explicitly or implicitly identify correspondences between image and point cloud features to predict corrections to the extrinsic parameters. Yet, most of these approaches produce calibration results in a single step, thereby leaving subsequent states after the initial adjustment unexploited. This oversight may limit the final accuracy, because further refinements could improve accuracy, especially when the initial error is substantial.

Fig. 1. The proposed surrogate diffusion for camera-LiDAR calibration. The diffusion variable $\pmb{x}_{t}$ controls the correction factor $\mathcal{G}(\pmb{x}_{t})$ applied to the initial extrinsic matrix $\pmb{T}_{CL}^{(0)}$ to generate noisy samples $\mathcal{G}(\pmb{x}_{t})\pmb{T}_{CL}^{(0)}$. The forward process transforms the state into one containing noise, while the denoising process reverses it by applying a trainable surrogate $S_{\theta}$.
To address this issue, CalibNet [7] introduces a straightforward single-model iterative approach: at each iteration, the output of the surrogate is used to correct the input extrinsics, forming the input of the next iteration. However, the success of this iterative process heavily relies on the original model's capability and robustness, specifically its ability to enhance accuracy across a wide range of initial errors. Multi-range iteration [9] alleviates this issue by training different models for various error ranges. Each model is tasked with reducing the calibration error to the next lower level, allowing the entire system to incrementally minimize error to the lowest possible range. Despite success in improving calibration accuracy, it necessitates separate training, inference, and storage for each model. This need for additional memory and computational resources presents challenges for online calibration, particularly when deploying on edge-computing devices in autonomous vehicles. In this study, we propose an innovative single-model iterative method that can improve any surrogate model through diffusion. To the best of our knowledge, this is the first application of diffusion in the context of camera-LiDAR calibration. As illustrated in Fig. 1 (with notations defined in Sec. III), the original method serves as a surrogate to iteratively refine the initial extrinsic matrix until it converges to the ground-truth matrix. The main contributions of our paper are outlined below:

• A linear surrogate diffusion (LSD) pipeline is proposed for single-model iterative camera-LiDAR calibration optimization. It is denoiser-agnostic and applicable to any individual calibration method.
• We analyze the data flow of our iterative approach and develop an intermediate buffer to enhance efficiency during the reverse LSD process.
• Extensive experiments on the KITTI dataset [13] have been conducted to validate the effectiveness and efficiency of our proposed diffusion method.
The remainder of this paper is organized as follows. Section II reviews recent target-based and targetless calibration methods; Section III introduces the pipeline of our surrogate diffusion model; Section IV presents the experimental settings and results; Section V summarizes our findings and discusses our future study.

# II. RELATED WORKS

# A. Target-Based Calibration Methods

Target-based calibration determines the extrinsic matrix between camera and LiDAR by utilizing a specific target that incorporates geometric constraints between corresponding 3D points in the point cloud and pixels in the 2D image. Calibration targets are classified into planar and 3D objects based on their shapes. Planar targets include chessboards [14]–[16], triangular boards [17], [18] and boards with holes [19]–[21]. In contrast, 3D calibration tools comprise V-shaped [22] and box-shaped objects [23]. Despite high accuracy and reproducibility, target-based calibration methods encounter several challenges, including the requirement for manual target placement in diverse positions and limited suitability for online calibration. Furthermore, determining certain hyperparameters, such as target size and calibration distance, remains challenging across different sensor systems.

# B. Targetless Calibration Methods

Instead of relying on the introduction of specific calibration targets, targetless methods leverage information extracted from natural scenes for calibration. These methods can be broadly categorized into four groups [24]: ego-motion-based, feature-based, information-based, and learning-based. Ego-motion-based methods hinge on geometric constraints spanning multiple frames, exemplified by techniques like hand-eye calibration [25], [26] and modality-consistent 3D reconstruction [27]–[29].
Feature-based methods solve extrinsics through cross-modal feature extraction and matching, using hand-crafted features such as edge points [30]–[32] and planar constraints [33], or mask matching based on semantic information [34]–[36]. Information-based methods optimize an information metric like mutual information [37], [38] or normalized mutual information [39], [40]. Learning-based methods learn cross-modal correspondences [41]–[43] or employ an end-to-end calibration network [7]–[10], [44].

# C. End-to-End Learning-Based Methods

End-to-end learning-based methods are central to our research. CalibNet [7] exemplifies a typical end-to-end calibration network, using ResNet [45] to extract features from camera and LiDAR data, which are then fused via convolutional and MLP layers. Building upon this framework, RGGNet [8] introduces a regularization loss to guide the network in predicting extrinsics that align with the ground-truth data distribution. LCCNet [9] enhances accuracy with a feature-matching layer that explicitly aligns deep features of images and point clouds, employing multi-range iterations. LCCRAFT [10] simplifies the encoders of LCCNet [9] and utilizes a RAFT-like [46] architecture for iterative and alternating optimization of extrinsics and feature matching predictions. CalibDepth [44] utilizes monocular depth maps to enhance cross-modality feature matching and implements an LSTM for multi-step prediction. In our experiments, we selected CalibNet, RGGNet, LCCNet, and LCCRAFT as surrogate denoisers due to their identical input modalities. To validate the effectiveness of our iterative approach, we combined these models with various iterative techniques to assess performance improvements. We selected two additional single-model iterative approaches as baselines: the straightforward iterative method proposed in [7] and SE(3) Diffusion [47], which was originally developed for point cloud registration and is related to our LSD.
We adapted SE(3) Diffusion for camera-LiDAR calibration to enable a comparative analysis.

# III. METHOD

# A. Problem Setting

Let $I$ represent the RGB image captured by the camera and $P$ denote the LiDAR point cloud. Define the relative transformation from LiDAR to camera as $\pmb{T}_{CL} \in \mathrm{SE}(3)$, with a known intrinsic matrix $\pmb{K}$ and an initial guess of the ground-truth extrinsics $\pmb{T}_{CL}^{gt}$, denoted as $\pmb{T}_{CL}^{(0)}$. For simplicity, we use $C$ to represent the conditions $[I, P, K]$. Given $C$ and $\pmb{T}_{CL}^{(0)}$, the objective of a camera-LiDAR calibration method $D_{\theta}$ is to estimate $\pmb{T}_{CL}^{gt}$. Since the initial extrinsic matrix $\pmb{T}_{CL}^{(0)}$ is known, we expect $D_{\theta}$ to output the correction as a left transformation, i.e., $T_{CL}^{gt}(T_{CL}^{(0)})^{-1}$. Considering that the internal constraints on the parameters of this SE(3) matrix are challenging for neural networks to process, we convert it to the Lie algebra form as the desired output of $D_{\theta}$:

$$ \Delta \pmb{\xi}_{gt} = \mathcal{G}^{-1}\left( \pmb{T}_{CL}^{gt} (\pmb{T}_{CL}^{(0)})^{-1} \right) \in \mathfrak{se}(3) $$

where $\mathcal{G}$ is the exponential map from $\mathfrak{se}(3)$ to SE(3), and $\mathcal{G}^{-1}$ is its inverse. The loss function to supervise $D_{\theta}$ is:

$$ \mathcal{L}(\Delta \hat{\xi}_{gt}, \Delta \xi_{gt}) = \| \Delta \hat{\xi}_{gt} - \Delta \xi_{gt} \|_{1} $$

where $\Delta \hat{\xi}_{gt}$ denotes the output of $D_{\theta}$.
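For concreteness, a minimal NumPy sketch of the maps $\mathcal{G}$ and $\mathcal{G}^{-1}$, of the training target of Eq. (1), and of the L1 loss of Eq. (2). The $[\rho, \phi]$ ordering (translation part first) is a convention choice here, and the closed-form log assumes rotation angles below $\pi$:

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(xi):
    """Exponential map G: se(3) -> SE(3), with xi = [rho, phi]."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    W = hat(phi)
    if theta < 1e-8:  # first-order approximation near identity
        R, V = np.eye(3) + W, np.eye(3) + 0.5 * W
    else:
        A = np.sin(theta) / theta
        B = (1 - np.cos(theta)) / theta**2
        Cc = (1 - A) / theta**2
        R = np.eye(3) + A * W + B * W @ W          # Rodrigues formula
        V = np.eye(3) + B * W + Cc * W @ W          # left Jacobian
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ rho
    return T

def se3_log(T):
    """Inverse map G^{-1}: SE(3) -> se(3) (valid for theta < pi)."""
    R, t = T[:3, :3], T[:3, 3]
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-8:
        return np.concatenate([t, np.zeros(3)])
    W = theta / (2 * np.sin(theta)) * (R - R.T)
    phi = np.array([W[2, 1], W[0, 2], W[1, 0]])
    A = np.sin(theta) / theta
    B = (1 - np.cos(theta)) / theta**2
    V_inv = (np.eye(3) - 0.5 * hat(phi)
             + (1 / theta**2) * (1 - A / (2 * B)) * hat(phi) @ hat(phi))
    return np.concatenate([V_inv @ t, phi])

def calib_target(T_gt, T_init):
    """Training target of Eq. (1)."""
    return se3_log(T_gt @ np.linalg.inv(T_init))

def l1_loss(xi_hat, xi_gt):
    """L1 supervision of Eq. (2)."""
    return float(np.abs(xi_hat - xi_gt).sum())
```

The exp/log pair round-trips exactly for moderate rotations, so the network target lives in an unconstrained 6-vector space while the correction it encodes stays a valid rigid transformation.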
To obtain the final estimate of $\pmb{T}_{CL}^{gt}$, we simply left-multiply the SE(3) output of $D_{\theta}$ onto $\pmb{T}_{CL}^{(0)}$ as follows:

$$ \hat{\pmb{T}}_{CL}^{gt} = \mathcal{G}(\Delta \hat{\xi}_{gt}) \pmb{T}_{CL}^{(0)} = \mathcal{G}\left( D_{\theta}(\pmb{C}, \pmb{T}_{CL}^{(0)}) \right) \pmb{T}_{CL}^{(0)} $$

To extend the above single-step prediction into a naive iterative method (NaIter), the current output can be utilized as the input for the subsequent iteration:

$$ \left\{ \begin{array}{l} \hat{\pmb{T}}_{CL}^{(i)} = \Delta \hat{\pmb{T}}_{CL}^{(i)} \pmb{T}_{CL}^{(0)}, \quad \Delta \hat{\pmb{T}}_{CL}^{(0)} = E \\ \Delta \hat{\pmb{T}}_{CL}^{(i+1)} = \mathcal{G}\left( D_{\theta}(\pmb{C}, \hat{\pmb{T}}_{CL}^{(i)}) \right) \Delta \hat{\pmb{T}}_{CL}^{(i)} \end{array} \right. $$

Algorithm 1: Diffusion Process (for training)

# B. Linear Surrogate Diffusion

1) Review of Diffusion Models: Diffusion models [48]–[50] are a category of likelihood-based generative models comprising a forward and a reverse process. During the forward process $q(\pmb{x}_{t}|\pmb{x}_{t-1})$, noise is progressively added to the sample $\pmb{x}_{0}$ to generate noisy samples $\pmb{x}_{t}$ until the sample is transformed into pure Gaussian noise $\epsilon \sim \mathcal{N}(0, E)$ ($E$ is the identity matrix).
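The NaIter loop of Eq. (4) can be sketched with $D_\theta$, the exponential map $\mathcal{G}$, and the conditions $C$ passed in as callables/data; the toy translation-only "network" in the usage below is purely illustrative, not any of the paper's models:

```python
import numpy as np

def naive_iterate(D_theta, G, C, T_init, n_iters=10):
    """NaIter (Eq. 4): at each step, map the network output to SE(3) via G
    and left-multiply it onto the current estimate, which then becomes the
    network input of the next iteration."""
    T_hat = np.array(T_init, dtype=float)
    for _ in range(n_iters):
        T_hat = G(D_theta(C, T_hat)) @ T_hat
    return T_hat
```

With a hypothetical denoiser that predicts half of the remaining error each step, the estimate converges geometrically toward the ground-truth extrinsics.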
This process can be simplified into a closed-form expression $q(\pmb{x}_{t}|\pmb{x}_{0}, \pmb{\epsilon})$:

$$ \pmb{x}_{t} = q(\pmb{x}_{t}|\pmb{x}_{0}, \pmb{\epsilon}) = \sqrt{\overline{\alpha}_{t}} \pmb{x}_{0} + \sqrt{1 - \overline{\alpha}_{t}} \pmb{\epsilon} $$

where $\overline{\alpha}_{t}$ is subject to a certain noise schedule. Here we adopt the cosine noise schedule proposed in [51], as formulated in Eq. (6):

$$ \left\{ \begin{array}{l} \overline{\alpha}_{t} = \frac{f(t)}{f(0)}, \quad f(t) = \cos\left( \frac{t/T + s}{1 + s} \cdot \frac{\pi}{2} \right)^{2} \\ \alpha_{t} = 1 - \beta_{t}, \quad \beta_{t} = 1 - \frac{\overline{\alpha}_{t}}{\overline{\alpha}_{t-1}} \end{array} \right. $$

Assume that the learned network estimates $\pmb{x}_{0}$ as $\hat{\pmb{x}}_{0}$. The reverse process aims to establish a probability $q(\pmb{x}_{t-1}|\pmb{x}_{t}, \hat{\pmb{x}}_{0})$, iteratively recovering $\pmb{x}_{0}$ from $\pmb{x}_{T}$.
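Eq. (5) and the cosine schedule of Eq. (6) fit in a few self-contained lines:

```python
import math

def cosine_alpha_bars(T, s=0.008):
    # Cumulative alphas of Eq. (6); alpha_bar[0] = 1 by construction and
    # the sequence decreases monotonically toward ~0 at t = T.
    f = lambda t: math.cos(((t / T + s) / (1 + s)) * math.pi / 2) ** 2
    return [f(t) / f(0) for t in range(T + 1)]

def forward_sample(x0, eps, alpha_bar_t):
    # Closed-form forward process of Eq. (5).
    return math.sqrt(alpha_bar_t) * x0 + math.sqrt(1 - alpha_bar_t) * eps
```

Note that in this paper's setting $\epsilon = \mathbf{0}$, so the forward sample degenerates to a pure scaling $\sqrt{\overline{\alpha}_t}\,x_0$ of the ground-truth correction.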
The standard denoising probability diffusion model [48] utilizes a stochastic reverse process formulated as:

$$ \pmb{x}_{t-1} = q(\pmb{x}_{t-1}|\pmb{x}_{t}, \hat{\pmb{x}}_{0}) = \pmb{\mu}_{\theta}(\pmb{x}_{t}, \hat{\pmb{x}}_{0}, t) + \pmb{\Sigma}(t) \pmb{\epsilon} $$

where $\mu_{\theta}(\pmb{x}_{t}, \hat{\pmb{x}}_{0}, t)$ and $\pmb{\Sigma}(t)$ are formulated as:

$$ \mu_{\theta}(\pmb{x}_{t}, \hat{\pmb{x}}_{0}, t) = \frac{ \sqrt{\alpha_{t}} (1 - \overline{\alpha}_{t-1}) \pmb{x}_{t} + \sqrt{\overline{\alpha}_{t-1}} (1 - \alpha_{t}) \hat{\pmb{x}}_{0} }{ 1 - \overline{\alpha}_{t} } $$

$$ \Sigma(t) = \frac{ (1 - \alpha_{t})(1 - \overline{\alpha}_{t-1}) }{ 1 - \overline{\alpha}_{t} } E $$

2) Selection of the Diffusion Variable: As shown in Fig. 1, unlike diffusion models for image generation [48], [49], [52], a diffusion model for camera-LiDAR calibration requires denoising on the extrinsic matrix $\pmb{T}_{CL}$, which carries internal SE(3) constraints. Another difference is that the initial state of our diffusion should be centered around the initial extrinsic matrix $\pmb{T}_{CL}^{(0)}$ rather than pure Gaussian noise.

Algorithm 2: Reverse Process (for inference)

Based on the above analysis, we model our diffusion process on the transformation difference between $\pmb{T}_{CL}^{gt}$ and $\pmb{T}_{CL}^{(0)}$ and take its Lie algebra form as our variable. In this case, the noisy initial extrinsic matrix can be expressed as $\mathcal{G}(\pmb{x}_{t}) \pmb{T}_{CL}^{(0)}$.
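One reverse step of Eqs. (7)-(9) as a function. The noise term here multiplies by the square root of $\Sigma(t)$, i.e., the standard deviation, as in standard DDPM implementations; with this paper's $\epsilon = \mathbf{0}$ setting the Gaussian draw can simply be disabled:

```python
import numpy as np

def reverse_step(x_t, x0_hat, t, alpha_bar, rng=None):
    # alpha_bar is a sequence with alpha_bar[0] = 1; alpha_t is recovered
    # as the ratio of consecutive cumulative alphas (Eq. 6).
    ab_t, ab_prev = alpha_bar[t], alpha_bar[t - 1]
    alpha_t = ab_t / ab_prev
    mu = (np.sqrt(alpha_t) * (1 - ab_prev) * np.asarray(x_t)
          + np.sqrt(ab_prev) * (1 - alpha_t) * np.asarray(x0_hat)) / (1 - ab_t)
    var = (1 - alpha_t) * (1 - ab_prev) / (1 - ab_t)
    noise = rng.standard_normal(np.shape(x_t)) if rng is not None else 0.0
    return mu + np.sqrt(var) * noise
```

At $t = 1$ the posterior variance vanishes and the step collapses onto the current estimate $\hat{x}_0$, which is how the final iterate lands on the predicted correction.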
As for the boundary constraints, $\pmb{x}_{T}$ is set to $\mathbf{0}$ to ensure $\mathcal{G}(\pmb{x}_{T}) \pmb{T}_{CL}^{(0)} = \pmb{T}_{CL}^{(0)}$, and $\pmb{x}_{0}$ is set to $\Delta \xi_{gt}$ (defined in Eq. (1)) to satisfy $\mathcal{G}(\pmb{x}_{0}) \pmb{T}_{CL}^{(0)} = \pmb{T}_{CL}^{gt}$. This definition results in $\pmb{\epsilon} = \pmb{x}_{T} = \mathbf{0}$, suggesting that $\epsilon$ follows a Dirac distribution $\delta(\mathbf{0})$. Although this setting may appear counterintuitive, we can regard it as a general diffusion process as defined in [53]. Additionally, the alternative condition $\epsilon \neq \mathbf{0}$ would increase the variation of $\Delta \xi_{gt}$, which would be adverse to the inverse process. Therefore, we decide to retain the setting of $\pmb{\epsilon} = \pmb{x}_{T} = \mathbf{0}$. 3) Surrogate Formulation: Inspired by [47], we introduce a surrogate to make our diffusion denoiser-agnostic. The surrogate $S_{\theta}$ omits the time embedding layer and estimates the transformation difference between $\pmb{T}_{CL}^{(0)}$ and $\pmb{T}_{CL}^{gt}$ from the noisy input $\pmb{x}_{t}$, which can be mathematically expressed as $\hat{\pmb{x}}_{0} = S_{\theta}(\pmb{x}_{t}, \pmb{C}, \pmb{T}_{CL}^{(0)}) = \mathcal{G}^{-1}(\hat{\pmb{T}}_{CL}^{gt} (\pmb{T}_{CL}^{(0)})^{-1})$. As described in Sec. III-A, $D_{\theta}$ predicts the transformation difference between $\pmb{T}_{CL}^{gt}$ and its input extrinsic matrix.
Therefore, the relationship between $D_{\theta}$ and $\hat{\pmb{x}}_{0}$ can be formulated as:

$$ \underbrace{ \mathcal{G}(\hat{\pmb{x}}_{0}) \pmb{T}_{CL}^{(0)} }_{ \hat{\pmb{T}}_{CL}^{gt} } = \underbrace{ \mathcal{G}\left( D_{\theta}(\pmb{C}, \mathcal{G}(\pmb{x}_{t}) \pmb{T}_{CL}^{(0)}) \right) }_{ D_{\theta} \text{ output} } \underbrace{ \mathcal{G}(\pmb{x}_{t}) \pmb{T}_{CL}^{(0)} }_{ D_{\theta} \text{ input} } $$

which can be simplified as below:

$$ \hat{\pmb{x}}_{0} = \mathcal{G}^{-1}\left( \mathcal{G}\left( D_{\theta}(\pmb{C}, \mathcal{G}(\pmb{x}_{t}) \pmb{T}_{CL}^{(0)}) \right) \mathcal{G}(\pmb{x}_{t}) \right) $$

In this context, the loss function to supervise $D_{\theta}$ is:

$$ \mathcal{L}_{LSD}(\hat{\pmb{x}}_{0}, \pmb{x}_{0}) = \| \hat{\pmb{x}}_{0} - \pmb{x}_{0} \|_{1} $$

In summary, during the forward process, $\pmb{x}_{t}$ is obtained using Eq. (5) and serves as the input of $S_{\theta}$, while $D_{\theta}$ is supervised by Eq. (12). The entire forward process is summarized in Algorithm 1. Concerning the reverse process, $\pmb{x}_{T}$ is initialized as $\mathbf{0}$ and progressively recovered into $\pmb{x}_{0}$ by applying Eq. (11) and Eq. (7) alternately. The whole reverse process is outlined in Algorithm 2. For clarity, we take DDPM [48] as an example to introduce our reverse process, but its sampler can be replaced with other efficient ODE solvers such as DPM [49] and UniPC [50]. 4) Intermediate Variable Buffering: Regarding the proposed surrogate model, the noisy input extrinsic matrix varies with $t$ according to Eq. (11).
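Eq. (11) and the loss of Eq. (12) can be sketched with $\mathcal{G}$, $\mathcal{G}^{-1}$, and $D_\theta$ passed in as callables. The translation-only toy maps and the oracle denoiser used to exercise it below are hypothetical, chosen only because the exponential map is trivial for pure translations:

```python
import numpy as np

def surrogate_x0(D_theta, G, G_inv, C, x_t, T_init):
    """Eq. (11): run the underlying calibration network on the noisy
    extrinsics G(x_t) T_init, compose its correction with x_t, and map the
    result back to se(3) as the surrogate's estimate of x_0."""
    T_noisy = G(x_t) @ T_init
    return G_inv(G(D_theta(C, T_noisy)) @ G(x_t))

def lsd_loss(x0_hat, x0):
    """L1 supervision of Eq. (12)."""
    return float(np.abs(np.asarray(x0_hat) - np.asarray(x0)).sum())
```

With a perfect denoiser, composing its correction with the current noise state recovers the full ground-truth correction exactly, matching the derivation of Eq. (10).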
However, we observe that some intermediate variables remain unchanged from the second iteration onward, so they can be stored in the first iteration for subsequent reuse. For example, an operation common to CalibNet, RGGNet, LCCNet and LCCRAFT is image feature extraction, which is independent of $\pmb{T}_{CL}$; thus the extracted image feature can be reused after the first iteration. Intermediate variable buffering is implemented during inference. Specifically, in Algorithm 2, it should be employed when $t = T-1, \ldots, 1$. We found this modification is also applicable to other iterative techniques and apply it to all of them for a fair efficiency comparison.

# IV. EXPERIMENTS

# A. Dataset Description

We conduct calibration experiments on the KITTI Odometry Dataset [13], which contains 22 sequences of camera-LiDAR data with corresponding ground-truth extrinsic matrices $\pmb{T}_{CL}^{gt}$ and intrinsic matrices $\pmb{K}$. To generate initial transformations $\pmb{T}_{CL}^{(0)}$ for the inputs, random perturbations are imposed on $\pmb{T}_{CL}^{gt}$, of which the rotation and translation ranges are respectively set to $\pm 15^{\circ}$ and $\pm 15\mathrm{cm}$ on each axis (referred to as $\pm 15^{\circ}15\mathrm{cm}$ hereinafter). For the data division, sequences 00, 02, 03, 04, 05, 06, 07, 08, 10, 12 are chosen for training, sequences 16, 17, 18 for validation, and sequences 13, 14, 15, 20, 21 for testing.

# B. Implementation Details

The image encoders of CalibNet, RGGNet and LCCNet are all configured to ResNet-18 [45]. Since the public code of LCCRAFT is unavailable, we implemented its image encoder using the default hyperparameters of RAFT [46]. Regarding diffusion settings, $s$ is set to 0.008 in Eq. (6) for our noise schedule. We use the LogSNR sampling scheduler and apply the UniPC [50] sampler to replace DDPM in Algorithm 2 for acceleration. The number of function evaluations (NFE) for all iterative methods is set to 10.
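The intermediate variable buffering described above amounts to memoizing the extrinsics-independent branch of the surrogate across reverse steps. A minimal sketch; `encode_image` and `fuse` are hypothetical stand-ins for a real model's sub-modules, not the paper's actual code:

```python
class BufferedSurrogate:
    """Caches the image feature, which does not depend on the current
    extrinsics, at the first reverse step (t = T) and reuses it for
    t = T-1, ..., 1."""

    def __init__(self, encode_image, fuse):
        self.encode_image = encode_image  # extrinsics-independent branch
        self.fuse = fuse                  # extrinsics-dependent branch
        self._img_feat = None

    def reset(self):
        # call between different input samples
        self._img_feat = None

    def __call__(self, image, point_cloud, T_current):
        if self._img_feat is None:  # only runs at the first iteration
            self._img_feat = self.encode_image(image)
        return self.fuse(self._img_feat, point_cloud, T_current)
```

Over ten reverse steps the expensive image branch runs once instead of ten times, which is where the reported inference-time savings come from.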
To demonstrate the advantages of our LSD approach, we compare it with single-use (Single) defined in Eq. (3) and NaIter formulated in Eq. (4). Additionally, we adapt a surrogate diffusion model, originally used in point cloud registration, to this calibration task for comparative purposes. We refer to this model as non-linear surrogate diffusion (NLSD). The differences among these iterative methods are discussed in Sec. IV-E.

# C. Metrics

We apply several metrics to comprehensively evaluate the performance of our method and the baselines. These metrics are defined based on the SE(3) distance:

$$ \varepsilon_{T} = \hat{\pmb{T}}_{CL}^{gt} (\pmb{T}_{CL}^{gt})^{-1} \in \mathrm{SE}(3) $$

To quantify calibration accuracy, we record the Euler angles about each axis (Rx, Ry, Rz) and the translation values along each axis (tx, ty, tz) w.r.t. $\varepsilon_{T}$, together with the rotation and translation root mean squared errors (RMSE). To evaluate calibration robustness, another two metrics are designed to illustrate the proportion of valid samples for which the calibration errors are within a certain range. Specifically, the metric $3^{\circ}3\mathrm{cm}$ reflects the percentage of samples with rotation and translation RMSE under $3^{\circ}$ and 3cm respectively, and a similar definition applies to $5^{\circ}5\mathrm{cm}$.

Fig. 2. Distribution of rotation RMSE and translation RMSE of different iterative methods.

Additionally, we evaluate the stability of different iterative methods, defined by the degree to which error decreases monotonically with iteration count.
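A sketch of the accuracy and robustness metrics from the SE(3) distance of Eq. (13). The XYZ Euler-angle convention below is an assumption, since the paper does not state which convention it uses:

```python
import numpy as np

def calib_errors(T_hat, T_gt):
    """Rotation/translation RMSE from the SE(3) distance of Eq. (13)."""
    E = T_hat @ np.linalg.inv(T_gt)
    R, t = E[:3, :3], E[:3, 3]
    # Euler angles in degrees, assuming an XYZ (R = Rz Ry Rx) convention
    rx = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    ry = np.degrees(np.arcsin(np.clip(-R[2, 0], -1.0, 1.0)))
    rz = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    rot_rmse = float(np.sqrt(np.mean(np.square([rx, ry, rz]))))
    trans_rmse = float(np.sqrt(np.mean(np.square(t))))
    return rot_rmse, trans_rmse

def success_rate(errors, rot_thresh_deg, trans_thresh_m):
    """Fraction of samples whose rotation and translation RMSE are both
    within the thresholds, e.g. the 3deg3cm metric."""
    ok = [r < rot_thresh_deg and t < trans_thresh_m for r, t in errors]
    return sum(ok) / len(ok)
```

A perfectly calibrated sample yields a zero error pair and counts toward both the 3°3cm and 5°5cm rates.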
Similar to $3^{\circ}3\mathrm{cm}$, a metric named $\rho\%$ is designed to measure the proportion of samples whose rotation RMSE and translation RMSE both satisfy the following condition:

$$ \mathrm{RMSE}_{i=2} \geq \mathrm{RMSE}_{i=5} \geq \mathrm{RMSE}_{i=10} $$

where $\mathrm{RMSE}_{i=k}$ represents the rotation/translation RMSE of the $k^{\mathrm{th}}$ iteration. The above condition reflects a property whereby the more iterations the model undergoes, the higher the accuracy it achieves.

# D. Calibration Results

1) Calibration Accuracy: Figure 2 illustrates the distribution of rotation and translation RMSE for Single, NaIter, NLSD, and LSD. For rotation RMSE, LSD consistently outperforms the other iterative methods across all surrogates. NaIter exhibits the poorest performance and the largest variation in most cases, except for CalibNet. NLSD does not consistently outperform Single across all surrogates: it performs better than Single with CalibNet and LCCNet but underperforms with RGGNet and LCCRAFT. In terms of translation RMSE, LSD demonstrates superior performance with LCCNet and LCCRAFT, though its advantage over Single is not as pronounced as in rotation RMSE. The median errors and variations of NLSD are higher compared to LSD. NaIter again performs the worst across all surrogates, although its variation is close to those of the other iterative methods.

Fig. 3. LiDAR projection maps of different iterative methods (from top to bottom: NaIter, LSD, NLSD). In addition to the initial state common to all three methods, we sampled three intermediate results at NFE = 2, 5, and 10 over ten steps to facilitate comparison. Significant differences in their final states (NFE = 10) are highlighted with yellow rectangles. The ground-truth calibrated state is also provided for reference.
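The $\rho\%$ metric amounts to checking the condition of Eq. (14) per sample, for both error types; the dictionary layout below is just one convenient representation:

```python
def stability_rho(samples):
    """Percentage of samples whose rotation and translation RMSE both
    decrease monotonically over the checkpoints i = 2, 5, 10 (Eq. 14).
    Each sample maps an iteration count to a (rot_rmse, trans_rmse) pair."""
    def monotone(sample, k):
        return sample[2][k] >= sample[5][k] >= sample[10][k]
    ok = [monotone(s, 0) and monotone(s, 1) for s in samples]
    return 100.0 * sum(ok) / len(ok)
```

A sample where either error type bounces upward between checkpoints fails the test, so $\rho\%$ directly rewards methods whose extra iterations never hurt.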
TABLE I CALIBRATION ROBUSTNESS AND STABILITY

2) Calibration Robustness and Stability: On top of accuracy, we also compare the robustness and stability of these iterative methods in Tab. I. The results indicate that LSD surpasses the other two iterative methods across all three metrics, with a particularly significant advantage in terms of $\rho\%$. In contrast, NaIter is the most unstable iterative method and lacks robustness. While NLSD exhibits improved robustness over Single on CalibNet and LCCNet, it does not show similar improvements on the other two surrogates. Furthermore, the $\rho\%$ of NLSD remains notably inferior to that of LSD.

# E. Differences of Three Iterative Methods

To qualitatively illustrate the differences in the iteration process among these methods, we draw LiDAR projection maps of an urban calibration scene over the course of the entire iterative calibration in Fig. 3. Although NaIter and NLSD converge faster than LSD, the latter achieves superior final accuracy. The yellow rectangles in Fig. 3 indicate that several critical edges are better aligned using LSD compared to NLSD and NaIter. Furthermore, the corresponding error curves are plotted in Fig. 4. With LSD, the errors on all six axes essentially decrease with the NFE, an advantage not observed with NLSD and NaIter.

Fig. 4. Error curves of different iterative methods w.r.t. an example scene. The x and y axes respectively denote NFE and the magnitude of error.

From a theoretical perspective, NaIter simply calls $D_{\theta}(\cdot)$ repeatedly to refine the current extrinsic matrix. In contrast, both NLSD and LSD formulate the entire iterative calibration problem as a diffusion process where each correction step is treated as a single denoising step, leading to a more accurate and stable iterative process.
The key differences between NLSD and LSD are as follows: first, NLSD defines the diffusion variable in the SE(3) space, whereas LSD does so in the $\mathfrak{se}(3)$ space; second, in generating $\pmb{x}_{t}$, NLSD employs a combination of nonlinear perturbation and interpolation, while LSD relies solely on linear interpolation; third, their posterior distributions differ. Following the conventions in [47], NLSD transforms both $\pmb{H}_{0}$ and $\pmb{H}_{t}$ into the $\mathfrak{se}(3)$ space for combination, and then maps the result back to the SE(3) space to obtain $\pmb{H}_{t-1}$, whereas LSD directly derives $\pmb{x}_{t-1}$ through a linear combination of $\pmb{x}_{0}$ and $\pmb{x}_{t}$. We attribute the superior performance of LSD over NLSD to two main factors. First, due to the linearity of the diffusion variable, LSD's reverse process can be naturally formulated as an ODE process, leading to improved numerical accuracy, an advantage that is not applicable to NLSD because the computation of the posterior $\pmb{H}_{t-1}$ is nonlinear. Second, due to the linear interpolation in the $\mathfrak{se}(3)$ space, LSD avoids taking excessively large correction steps in the early iterations, thereby preserving room for further refinement if the initial prediction is insufficiently accurate.

# F. Efficiency Test

We present the inference time per batch (with a batch size of 16) for each model in Table II. All tests were conducted on a computer equipped with an NVIDIA RTX 4060 Laptop GPU and an Intel i7-12650H CPU. Since NaIter primarily involves repeated computations of $D_{\theta}$ with minimal additional operations, comparing the execution speed of the Single and NaIter models provides a fair assessment of the efficiency improvements achieved by our proposed buffering technique.
Theoretically, NaIter's inference time should be at least ten times that of the single-step model; however, in practice, the real inference time is significantly shorter due to the buffering technique. This technique reduces inference time by 21.35% (LCCRAFT) to 51.15% (CalibNet). Compared to NaIter, the implementation of LSD and NLSD introduces a moderate increase in computational time due to additional computations required by the noise scheduler. LSD incurs a slightly higher overhead due to the numerical approximation steps in the ODE solver.

TABLE II INFERENCE TIME (MS) PER BATCH FOR EACH MODEL
Cameras and LiDAR are essential sensors for autonomous vehicles. The fusion of camera and LiDAR data addresses the limitations of individual sensors but relies on precise extrinsic calibration. Recently, numerous end-to-end calibration methods have been proposed; however, most predict extrinsic parameters in a single step and lack iterative optimization capabilities. To address the increasing demand for higher accuracy, we propose a versatile iterative framework based on surrogate diffusion. This framework can enhance the performance of any calibration method without requiring architectural modifications. Specifically, the initial extrinsic parameters undergo iterative refinement through a denoising process, in which the original calibration method serves as a surrogate denoiser to estimate the final extrinsics at each step. For comparative analysis, we selected four state-of-the-art calibration methods as surrogate denoisers and compared the results of our diffusion process with those of two other iterative approaches. Extensive experiments demonstrate that when integrated with our diffusion model, all calibration methods achieve higher accuracy, improved robustness, and greater stability compared to other iterative techniques and their single-step counterparts.
[ "cs.CV" ]
# 1 INTRODUCTION

Video data is pervasive in today's world, playing a critical role in various applications such as autonomous machines [23, 29, 32], education [28], healthcare [82], security and surveillance [51, 92], e-commerce [118, 120], and many others [56, 67]. Recently, Video-Language Models (VideoLMs) have demonstrated remarkable capabilities, offering significant potential for flexible and powerful video analytics tasks [19, 31, 39, 110, 111, 122]. While these powerful algorithmic advances promise enormous potential, the massive computational demand poses a significant hurdle to the widespread adoption of VideoLMs for video query systems. The excessive computational demand arises from two primary factors. First, modern VideoLMs heavily rely on Vision Transformers (ViTs) [25] as their core engine for visual feature extraction. ViTs contain hundreds of millions of parameters and require tens of billions of FLOPs for each inference. Second, ViT inference must be performed iteratively across numerous video frames, which can be exceptionally large in number (e.g., a 1-hour video sampled at 2 FPS contains 7,200 frames). Consequently, utilizing VideoLMs for large-scale video analytics becomes nearly infeasible, even with multi-GPU datacenter resources. Figure 1 provides an overview of video-language query systems, where ViTs process video frames to extract visual embeddings, which are then fed into query-specific operations such as video retrieval, video question answering, and video question grounding. The critical role of ViTs in this process emphasizes the importance of improving ViT inference efficiency to unleash the potential of VideoLMs for building video-language query systems.

Figure 1: Overview of video language query systems supporting three example VideoLMs. Modern VideoLMs employ a vision transformer (ViT) as their core engine.
Predictably, the vital need for efficient ViT inferencing has led to a large body of acceleration work in recent years [8, 12, 14, 24, 27, 47, 60, 62, 65, 76, 78, 94]. Most of these works focus on intra-frame computation reuse, targeting inter-token feature similarities, since they aim to speed up a single ViT inference run processing one image input. While effective for single images, these methods overlook additional opportunities present in video data, which exhibits inter-frame similarities across multiple consecutive frames. Recently, several works [17, 26, 86] have pioneered the exploitation of inter-frame computation reuse for ViT inferencing on video applications. However, these works have the following two limitations, which leave them in a position of questionable utility for most practical use cases: Limitation 1: These methods manually identify reuse opportunities for given models and enable computation reuse in a fixed manner, making it considerably difficult to locate the sweet spot in the accuracy-reuse tradeoff space. Limitation 2: Although these techniques enable significant FLOP reduction, they do not yield corresponding performance gains, as the remaining computations are sparse and, therefore, inefficient on GPUs. To address these limitations, this work proposes Déjà Vu, an algorithm-system co-designed query engine that leverages a learning-based approach to automatically identify computation reuse opportunities and find the sweet spot in the tradeoff space between accuracy and reuse. The key challenges are (1) to maximize the reuse opportunities without causing significant accuracy loss in VideoLM applications and (2) to effectively translate the FLOPs savings into performance gains. Déjà Vu tackles these challenges by making the following two key contributions, each corresponding to the two limitations: (1) ReuseViT: Reuse-enabling ViT model.
We develop a modified ViT model, referred to as ReuseViT, that reuses precomputed values for similar tokens between frames. To maximize reuse opportunities, we devise a frame referencing strategy that determines inter-frame reference chains for computation reuse. ReuseViT contains Decision Layers that dynamically determine when to reuse computations by processing multiple inputs such as cosine similarity, attention scores, codec metadata, and reference types, allowing it to learn the importance and interactions of these hints automatically. Additionally, it employs Restoration Layers that learn to calibrate the reused computation values based on the changes that occur in the current frame. To train ReuseViT, we use reparameterization with soft gating via the Gumbel-Softmax mechanism, enabling backpropagation through discrete decisions. This allows the model to mimic hard gating while still letting gradients flow. Further, we model error propagation by training with grouped frames to mitigate error accumulation from reusing computations across multiple frames. (2) Memory and compute compactions for fast and efficient ReuseViT inferencing. To achieve significant performance gains from these algorithmic innovations, we propose three key system-level techniques. (1) We introduce layer-wise computation scheduling, which processes computations for multiple frames in a layer-by-layer manner. (2) This approach enables cached memory compaction, a memory management technique that clears caches after each layer is completed, allowing for optimized memory usage. Consequently, this increases the batch size during inference, leading to better hardware utilization. (3) Additionally, we implement sparse computation compaction, which restructures irregular data patterns caused by computation reuse into dense computations suitable for efficient GPU execution.
By consolidating tokens from multiple frames, we create more regular matrices, improving the efficiency of matrix multiplication operations on GPUs, particularly when reuse rates are high. To demonstrate the effectiveness of Déjà Vu, we perform evaluations using three different VideoLMs covering (1) Video Retrieval, (2) Video Question Answering, and (3) Video Question Grounding, on their respective datasets. We observe that Déjà Vu achieves throughput improvements of 1.81×, 2.64×, and 2.54× for the three tasks, respectively, within a 2% error bound. Under the same accuracy constraints, the state-of-the-art systems using inter-frame computation reuse offer only 1.32×, 2.08×, and 2.20× speedups for the respective tasks. These significant performance gains, coupled with minimal accuracy loss, underscore the solution’s potential to bridge the computational gap, paving the way for VideoLMs to unlock new capabilities. # 2 BACKGROUND AND MOTIVATION # 2.1 Video Query Processing A rich body of research explores accelerating large-scale video queries using AI models. We categorize them into three groups. Task-specific CNN pipelines. Early approaches [11, 41, 42, 48] accelerate queries by training small, query-specific models as approximations for computationally expensive deep learning pipelines. Systems such as NoScope [42] and BlazeIt [41] train lightweight classifiers or regression models tailored to each query, significantly reducing inference costs for known object classes. However, these methods are inherently inflexible, as they require model training or selection for every new query, making them unsuitable for quickly adapting to new or changing requirements. Task-agnostic proxy embeddings. The second category [4, 43] precomputes a single embedding for video frames to allow fast similarity-based retrieval without query-specific training.
Methods such as TASTI [43] cluster frames in a learned embedding space, allowing queries to be answered using indexed embeddings rather than full model inference. While these approaches eliminate the per-query model burden, they may struggle with open-ended queries (i.e., broad or natural-language requests) if novel or uncommon concepts are poorly represented in precomputed embeddings. Vision-Language Pretrained (VLP) embeddings. More recent approaches [66, 80] provide a more general solution by leveraging VLP models like CLIP [77], which map image and text queries into a shared semantic space. These models support open-ended, natural-language-based video queries without requiring predefined object classes or embedding indexes. However, recent works [66, 101] illustrate that relying solely on such models has limitations in practical video retrieval scenarios. While VLP models exhibit strong generalization capabilities, they often struggle with uncommon or domain-specific queries where training data is sparse. To address these concerns, Déjà Vu leverages VideoLMs, advanced models built on top of VLP architectures that specialize in understanding video context. # 2.2 Video-Language Models (VideoLMs) VideoLMs extend VLP architectures to support complex tasks including video retrieval, video question answering, and video question grounding. For instance, video retrieval involves matching a textual description (e.g., “Show me a clip of underwater scenery”) to the most relevant video in a large corpus. Video question answering asks natural-language questions about the visual content, such as “How many sea turtles appear?”, requiring the model to understand both objects and context. Computational challenges.
Despite their impressive capabilities, the high computational complexity of VideoLMs remains a critical hurdle for potential applications. A major contributor to VideoLM overhead is the embedding generation process using large-scale VLP models [37, 45, 50, 52, 53, 77, 83, 85, 88, 114], which commonly rely on Vision Transformer (ViT) [25] architectures. ViTs often comprise hundreds of millions of parameters and require billions of FLOPs per inference, which must be run on numerous video frames. Figure 2 illustrates the computational breakdown for three VideoLMs. Notably, embedding generation via ViT dominates the overall FLOPs, underscoring the need for efficient ViT acceleration. # 2.3 Vision Transformer (ViT) Acceleration Architecture of ViT. Figure 3 illustrates the ViT architecture, which adapts the transformer architecture from natural language processing to the vision domain. In ViTs, an input image or video frame is partitioned into fixed-size patches, which are linearly embedded to form a sequence of tokens. A class token (CLS) is appended to this sequence to aggregate global information. The tokens pass through Transformer encoder layers, performing query-key-value projections, multi-head self-attention, and feed-forward network computations. Figure 2: FLOPs breakdown across three VideoLM tasks: video retrieval (CLIP4Clip), video question answering (FrozenBiLM), and video question grounding (TempCLIP). Figure 3: ViT model architecture. Existing acceleration methods. To address the computational demands of ViTs, various methods exploit redundancies in the data. Acceleration techniques focusing on inter-token redundancies reduce computation by pruning less important tokens or merging similar ones within a single image [8, 12, 14, 24, 27, 47, 60, 62, 65, 76, 78, 94]. While these pruning and merging methods effectively skip unnecessary computations within a single image or frame, they ignore the redundancy across frames.
In large-scale video analytics, small frame-to-frame changes can be exploited to a much greater extent. Hence, single-image token pruning might still repeat most computations for each of these frames, making it less effective for long videos in which large swaths of patches remain nearly identical over short time intervals. More recently, other methods have attempted to leverage inter-frame computation reuse in video data [17, 26, 86], offering greater computational savings. Among them, CMC [86] and Eventful Transformer [26] are most relevant to Déjà Vu, as they explicitly reuse partial computations across frames. In contrast, vid-TLDR [17] merges tokens temporally, reducing redundancy through token aggregation rather than direct computation reuse. We provide detailed comparisons against CMC and Eventful Transformer in Section 7.1. # 2.4 Limitations of Video ViT Acceleration While promising, video-targeted acceleration methods exhibit limitations that hinder their practical utility. Challenges in balancing reuse and accuracy. Existing methods rely on manually designed strategies for computation reuse, which make it substantially difficult to locate the optimal balance between accuracy and computational savings. For instance, Eventful Transformer [26] requires fixing the number of tokens that reuse computation at each layer. Due to the large search space created by the tens of encoder layers and the variability in video content, it is challenging to predict the consequences of increasing reuse in certain layers. As a result, these methods may lead to a suboptimal tradeoff between computation reuse and accuracy. Challenges in realizing FLOP savings as speedups.
While reducing computational complexity (FLOPs) is crucial, achieving actual speedups requires addressing runtime factors. Mixing computed and reused tokens leads to sparse computations, causing inefficiencies on GPUs optimized for dense workloads. Our empirical analysis shows that existing video-targeted ViT acceleration techniques [26, 86] deliver limited speedups due to overheads in memory usage, data movement, and hardware utilization (see Section 7). Furthermore, some prior acceleration works [86] rely on specialized hardware accelerators, which are not readily available in standard commodity systems, limiting their accessibility. These limitations motivate us to design a customized ViT model, dubbed ReuseViT, which can automatically identify computation reuse opportunities in video data while carefully balancing the accuracy-reuse tradeoff. Additionally, we introduce memory-compute joint compaction techniques to effectively convert ReuseViT’s FLOP savings into tangible performance gains. # 3 MODEL ARCHITECTURE OF REUSEVIT To address computational challenges in ViT-based VideoLMs, we propose ReuseViT, a model that automatically identifies safe (accuracy-preserving) computation reuse opportunities to accelerate inference. ReuseViT is designed to maximally reduce redundant computations while maintaining high accuracy by leveraging inter-frame similarities in video data. # 3.1 Frame Selection for Computation Reuse Determining which frames to reference is critical to maximizing computation reuse opportunities. In video data, especially at low frame rates common in VideoLM tasks, frame contents can drift significantly over time. Hence, ReuseViT reorders frames to allow referencing both past and future frames, potentially boosting the likelihood of finding similar content. Sequential frame computation. Figure 4(a) illustrates a basic strategy where each new frame references one or more preceding frames.
Because nearer frames generally exhibit higher temporal similarity, referencing them provides reuse benefits. However, including frames further back tends to yield diminishing returns due to overlapping patches that offer little additional information. Figure 4: (a) A basic approach where each frame references preceding frames sequentially, (b) Our proposed reordering where frames reference both past and future frames. Figure 5: FLOPs breakdown of core computations within a single encoder layer of vision transformers at different scales. Reordered frame computation. To harness both past and future frames, ReuseViT processes frames in an out-of-order fashion, as shown in Figure 4(b). We categorize frames into four types: I-frames (computed independently), P-frames (referencing a previous frame), $\mathrm{B_{dist2}}$-frames (referencing frames two steps away), and $\mathrm{B_{dist1}}$-frames (referencing immediate neighbors), reflecting terminology akin to video codecs. Following a pattern of $\mathrm{I} \to (\mathrm{P} \to \mathrm{B_{dist2}} \to \mathrm{B_{dist1}} \to \mathrm{B_{dist1}}) \to \dots$ helps capture bidirectional temporal redundancies that purely sequential schemes may overlook. By referencing both directions, we increase the potential for reuse and reduce the frequency of full computations. When choosing references, ReuseViT evaluates the quality and temporal distance of candidate frames to preserve accuracy without incurring excessive overhead. Subsequent sections detail this decision-making process. # 3.2 Layer Selection for Reuse Deciding which layers within the ViT architecture are suitable for computation reuse is crucial for maximizing efficiency gains without compromising performance. FLOPs breakdown of ViT layers. Figure 5 presents the FLOPs breakdown analysis results for a transformer encoder layer of three different ViT variants.
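A back-of-the-envelope count illustrates the shape of this breakdown (a sketch assuming a ViT-L-like layer with hidden size 1024, a 4× FFN expansion, and 257 tokens, counting one multiply-accumulate as 2 FLOPs; the exact figures in Figure 5 depend on the variant):

```python
# Rough per-layer FLOP counts for one ViT encoder layer
# (one multiply-accumulate counted as 2 FLOPs; sizes are assumptions).
N, D = 257, 1024                  # tokens (256 patches + CLS), hidden size

qkv  = 2 * N * D * (3 * D)        # query/key/value projections
attn = 2 * N * N * D * 2          # QK^T scores plus attention-weighted sum
proj = 2 * N * D * D              # attention output projection
ffn  = 2 * N * D * (4 * D) * 2    # two FFN matmuls (D -> 4D -> D)

total = qkv + attn + proj + ffn
print(f"QKV+FFN share of layer FLOPs: {(qkv + ffn) / total:.0%}")
```

Because the token count (around 257) is small relative to the hidden dimensions, the quadratic-in-N attention term stays minor, and the token-independent QKV and FFN matmuls dominate.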
The results suggest that the query-key-value (QKV) projection and the feed-forward network (FFN) are the primary consumers of computational resources in ViTs. Unlike large language models (LLMs), where the self-attention layer is the main bottleneck due to longer sequence lengths, ViTs process shorter sequences (around 256 patches). Thus, in ViTs, the self-attention layer contributes much less to the overall computational cost relative to the QKV projection and FFN layers. Computation patterns of layers. Besides computational cost, we consider the computation pattern of each layer. The QKV projection and FFN operations are applied independently to each token, without inter-token dependencies, making them well suited for computation reuse. In contrast, the self-attention operation involves interactions among all tokens, making reuse more challenging and potentially affecting model output. Given these considerations, we focus on reusing computations in the QKV projection and FFN layers, targeting the most computationally intensive and token-independent operations. # 3.3 Criteria for Computation Reuse An essential aspect of ReuseViT is determining when to reuse computations for specific tokens.
We introduce two key components: (1) a Decision Layer that decides whether to reuse computations, and (2) a Restoration Layer that calibrates reused computations to align with the current frame’s context. Figure 6 depicts the overall architecture of ReuseViT. Decision layer. The decision layer identifies tokens suitable for computation reuse by assessing multiple informative cues for each token. By integrating various inputs, the layer makes nuanced decisions without relying on hand-crafted rules. The decision layer takes as input a concatenation of the following features: Similarity Measure ($s$): We compute the cosine similarity between the current token and corresponding tokens in the reference frames. A higher similarity suggests that the token content has remained unchanged, making it a candidate for reuse: $$ s_i = \operatorname*{max}\left( \cos\left( T_i^{\mathrm{cur}}, T_i^{\mathrm{past}} \right), \cos\left( T_i^{\mathrm{cur}}, T_i^{\mathrm{future}} \right) \right) $$ Here, $T_i^{\mathrm{cur}} \in \mathbb{R}^{D}$ represents the $i$-th token of the current frame, while $T_i^{\mathrm{past}}$ and $T_i^{\mathrm{future}}$ are the corresponding tokens from the previous and future reference frames.
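The similarity feature can be computed directly from the token vectors; a minimal pure-Python sketch (function names are ours, not the paper's implementation):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def similarity_feature(t_cur, t_past, t_future):
    """s_i: best similarity against the past and future reference tokens."""
    return max(cosine(t_cur, t_past), cosine(t_cur, t_future))

# A token identical to its future reference scores 1.0, the strongest
# possible hint that its computation can be reused.
s = similarity_feature([1.0, 0.0], [0.0, 1.0], [1.0, 0.0])  # -> 1.0
```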
This strategy follows prior work on token merging using similarity metrics [8, 14, 24]. Token Importance ($t$): We use the attention weights from the class token to estimate each token’s importance. Tokens with higher attention are more critical and may require fresh computation. This method is consistent with existing approaches that prune tokens based on class-token attention [14, 27, 62, 108]. Reference Type ($r$): The reference frame type (e.g., I-frame, P-frame, $\mathrm{B_{dist1}}$, $\mathrm{B_{dist2}}$) offers insight into the reference’s temporal proximity, aiding assessment of the reliability of reused computations. Codec Metadata ($c$): Metadata from video codecs offers block-wise hints about spatio-temporal redundancies, providing insights into areas where video content undergoes motion or structural changes. These metadata signals, such as motion vectors and residuals, have been leveraged in prior research [36, 115, 124] to guide computational optimizations. By combining these inputs, the decision layer, implemented as a simple two-layer MLP, makes informed decisions. $$ \begin{array}{rl} & v_i = \mathrm{concat}(s_i, t_i, r_i, c_i), \quad i = 1, \dots, N \\ & d_i = \mathrm{MLP}_{\mathrm{decision}}(v_i) \\ & \mathcal{M}_i = \left\{ \begin{array}{ll} 1, & \mathrm{if}\ d_i > 0 \\ 0, & \mathrm{otherwise} \end{array} \right., \quad i = 1, \dots, N \end{array} $$ where $\mathcal{M}_i \in \{0, 1\}$ indicates whether to reuse (1) or recompute (0) the computations for token $i$. By combining similarity scores with token importance and codec metadata, the decision layer captures both visual and semantic cues, striking a balance between accuracy and reuse.
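To make the gating concrete, here is a toy sketch that thresholds a per-token decision score into a hard reuse mask (a linear scorer stands in for the paper's two-layer MLP; all weights and feature values are illustrative assumptions):

```python
def reuse_mask(features, weights, bias):
    """Hard reuse decisions: M_i = 1 (reuse) if the toy linear score
    d_i = w . v_i + b is positive, else 0 (recompute)."""
    mask = []
    for v in features:
        d = sum(w * x for w, x in zip(weights, v)) + bias
        mask.append(1 if d > 0 else 0)
    return mask

# Feature vector per token: (similarity, importance, reference_type, codec_hint).
# Toy weights: high similarity favors reuse, high importance favors recompute.
mask = reuse_mask(
    [(0.95, 0.1, 1.0, 0.0),   # stable, unimportant token
     (0.40, 0.8, 1.0, 1.0)],  # changed, salient token
    weights=(2.0, -1.0, 0.1, -0.5), bias=-1.0)
# -> [1, 0]: the stable token is reused, the salient one recomputed
```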
This data-driven approach enables the model to adapt effectively to diverse video scenarios without manual tuning of thresholds or heuristics. Token filtration. Based on the reuse map $\mathcal{M}$, tokens are partitioned into recompute tokens $C$ and reuse tokens $R$. $$ \begin{array}{l} C = \{ T_i^{\mathrm{cur}} \mid \mathcal{M}_i = 0 \} \quad \text{(recompute tokens)} \\ R = \{ T_i^{\mathrm{cur}} \mid \mathcal{M}_i = 1 \} \quad \text{(reuse tokens)} \end{array} $$ Recompute tokens $C$ undergo the standard feed-forward network and query-key-value projection. $$ \tilde{C}^{\mathrm{cur}} = \mathrm{QKV}\left( \mathrm{FFN}\left( C^{\mathrm{cur}} \right) \right) $$ Restoration layer. To adjust for discrepancies between reused computations and the current frame, we introduce a restoration layer that calibrates the reused tokens. For each reuse token $i$, we compute the difference between the tokens. $$ \Delta R_i = R_i^{\mathrm{cur}} - R_i^{\mathrm{ref}} $$ This difference captures changes between the current and reference tokens. The restoration layer processes $\Delta R_i$ using a small two-layer MLP with a hidden size of 128, which is significantly smaller than the hidden size of 1,024 used in the FFN layers. This design choice reduces computational overhead while efficiently obtaining a calibration value for the change in the input. $$ \hat{R}_i^{\mathrm{cur}} = \tilde{R}_i^{\mathrm{ref}} + \mathrm{MLP}_{\mathrm{restoration}}(\Delta R_i) $$ where $\tilde{R}_i^{\mathrm{ref}}$ is the reused computation from the reference frame, and $\hat{R}_i^{\mathrm{cur}}$ is the calibrated token for the current frame.
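The restoration step can be sketched with a toy two-layer MLP on plain lists (dimensions, weights, and names here are illustrative assumptions; the paper's layer uses a 128-unit hidden size and trained parameters):

```python
def mlp2(x, w1, b1, w2, b2):
    """Tiny two-layer MLP with ReLU, operating on plain lists."""
    h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return [sum(wi * hi for wi, hi in zip(row, h)) + b
            for row, b in zip(w2, b2)]

def restore(r_cur, r_ref, y_ref, w1, b1, w2, b2):
    """R_hat = y_ref + MLP(r_cur - r_ref): calibrate a reused output
    using the input-side drift between current and reference tokens."""
    delta = [c - r for c, r in zip(r_cur, r_ref)]
    corr = mlp2(delta, w1, b1, w2, b2)
    return [y + c for y, c in zip(y_ref, corr)]

# Toy identity weights (illustrative only): correction = ReLU(delta).
w1 = [[1.0, 0.0], [0.0, 1.0]]; b1 = [0.0, 0.0]
w2 = [[1.0, 0.0], [0.0, 1.0]]; b2 = [0.0, 0.0]
calibrated = restore([1.0, 2.0], [1.0, 1.0], [0.5, 0.5], w1, b1, w2, b2)
# -> [0.5, 1.5]: the cached output, shifted by the computed correction
```

Note that when the current token matches its reference exactly (zero delta, zero biases), the cached output passes through unchanged.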
This calibration effectively improves accuracy with minimal computational overhead (about 4% compared to the standard QKV and FFN computation). Token reconstruction. We reconstruct the full set of tokens $\hat{T}^{\mathrm{cur}}$ by combining the recomputed tokens $\tilde{C}^{\mathrm{cur}}$ and the reused tokens $\hat{R}^{\mathrm{cur}}$ according to the reuse map $\mathcal{M}$. $$ \hat{T}_i^{\mathrm{cur}} = \left\{ \begin{array}{ll} \tilde{C}_j^{\mathrm{cur}}, & \mathrm{if}\ \mathcal{M}_i = 0 \\ \hat{R}_k^{\mathrm{cur}}, & \mathrm{if}\ \mathcal{M}_i = 1 \end{array} \right. $$ where $i$ indexes the original token sequence, and $j$ and $k$ index the recomputed and calibrated reused tokens, respectively. This process maintains the original order of tokens, preserving positional relationships within the model. The low-complexity design of the decision and restoration modules (e.g., small MLPs) ensures modest overhead relative to full inference, yielding significant net savings in large-scale deployments despite the small cost of restoring reused tokens. # 4 TRAINING REUSEVIT ReuseViT’s efficiency requires training its decision and restoration layers for a given ViT model and dataset, keeping the pre-trained ViT model frozen. This section outlines these training techniques, focusing on handling discrete reuse decisions and modeling error accumulation through grouped frame training. # 4.1 Handling Discrete Reuse Decisions A key training challenge for ReuseViT is handling discrete reuse decisions, which obstruct gradient flow through the gating mechanism during backpropagation. Specifically, the binary decisions to reuse or recompute computations for tokens hinder the gradient-based optimization essential for training. To enable gradient flow through the gating mechanism, we approximate the hard binary decisions with continuous values during training.
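For a single binary decision, this relaxation can be sketched as a Gumbel-Sigmoid-style gate (a standard binary Concrete relaxation, not necessarily the paper's exact parameterization; all names are assumptions):

```python
import math, random

def soft_gate(logit, tau, rng=random):
    """Continuous relaxation of a hard 0/1 reuse decision:
    sigmoid((logit + g1 - g2) / tau), with Gumbel noise g = -log(-log(u)).
    Lower temperature tau pushes the gate toward a hard 0 or 1."""
    g1 = -math.log(-math.log(rng.random()))
    g2 = -math.log(-math.log(rng.random()))
    return 1.0 / (1.0 + math.exp(-(logit + g1 - g2) / tau))

random.seed(0)
m = soft_gate(logit=2.0, tau=1.0)      # a value in (0, 1), biased toward 1
hard = soft_gate(logit=2.0, tau=0.01)  # low temperature: nearly hard decision
```

Annealing `tau` downward over training, as described next, moves the gate from fully soft toward near-binary behavior while gradients continue to flow through the sigmoid.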
This soft gating mechanism allows the decision layer to be trained end-to-end using backpropagation without the need for specialized techniques. By replacing the hard decisions in Equation 4 with continuous approximations, we ensure that gradients can propagate through the layer, facilitating effective learning. $$ \begin{array}{rl} & \mathcal{M}_{\mathrm{soft}} = \mathrm{GumbelSoftmax}\left( \mathrm{MLP}_{\mathrm{decision}}(v) \right) \in [0, 1]^N \\ & \hat{T}_{\mathrm{soft}}^{\mathrm{cur}} = \mathcal{M}_{\mathrm{soft}} \odot \hat{T}^{\mathrm{cur}} + (1 - \mathcal{M}_{\mathrm{soft}}) \odot \tilde{T}^{\mathrm{cur}} \end{array} $$ Here, $\odot$ denotes element-wise multiplication, $\mathcal{M}_{\mathrm{soft}}$ is the soft mask for all $N$ tokens, $\hat{T}^{\mathrm{cur}}$ represents the calibrated reused computations derived from the reference frames, and $\tilde{T}^{\mathrm{cur}}$ represents the recomputed tokens of the current frame. We found that gradually lowering the Gumbel-Softmax temperature over the course of training helps the model transition smoothly from a fully soft gating regime toward more selective token reuse. This annealing avoids sudden spikes in the reuse mask while preserving stable gradient flow. At inference time, we apply the hard decisions to realize the computational savings achieved by ReuseViT. # 4.2 Loss Function for Accuracy and Efficiency To train ReuseViT effectively, we formulate a loss function balancing model accuracy and computational efficiency, aiming for performance gains without degradation in prediction quality. Training objective. Our objective is to ensure that the approximated features produced by ReuseViT remain close to the original features generated by the unmodified ViT model.
By preserving the original embeddings, we maintain the model’s predictive performance across different tasks without the need for task-specific fine-tuning. This self-supervised approach allows the model to generalize effectively in practical systems. Similarity loss. We use the cosine similarity metric to encourage the approximated features to remain close to the original features of the VLP model. $$ \mathcal{L}_{\mathrm{sim}} = 1 - \cos\left( Z^{\mathrm{cur}}, \hat{Z}^{\mathrm{cur}} \right) $$ Here, $Z^{\mathrm{cur}}$ is the original final feature vector, and $\hat{Z}^{\mathrm{cur}}$ is the corresponding feature vector after computation reuse and calibration. However, using only the similarity loss may discourage computation reuse, as the model can minimize loss by recomputing all tokens, negating efficiency gains. Reuse loss. To incentivize reuse, we introduce a reuse loss. $$ \mathcal{L}_{\mathrm{reuse}} = \frac{1}{LN} \sum_{l=1}^{L} \sum_{i=1}^{N} \mathcal{M}_{l,i} $$ This loss term represents the average reuse rate across all tokens and layers. By maximizing $\mathcal{L}_{\mathrm{reuse}}$, we encourage the model to reuse more computations, promoting computational efficiency. Combined loss function. The overall loss function combines the similarity loss and the reuse loss: $$ \mathcal{L} = \mathcal{L}_{\mathrm{sim}} + \alpha \cdot \operatorname*{max}\left( 0, R_{\mathrm{target}} - \mathcal{L}_{\mathrm{reuse}} \right) $$ Here, $\alpha$ is a weighting hyperparameter that balances the trade-off between accuracy (controlled by $\mathcal{L}_{\mathrm{sim}}$) and computational efficiency (encouraged by $\mathcal{L}_{\mathrm{reuse}}$), and $R_{\mathrm{target}}$ is the target reuse rate.
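The combined objective is straightforward to compute; a minimal sketch (variable names and default values are assumptions for illustration):

```python
def combined_loss(sim_loss, reuse_rate, alpha=1.0, target=0.8):
    """L = L_sim + alpha * max(0, R_target - L_reuse): the hinge term
    penalizes the model only when reuse falls short of the target."""
    return sim_loss + alpha * max(0.0, target - reuse_rate)

below = combined_loss(sim_loss=0.10, reuse_rate=0.50)  # ~0.40: penalty active
above = combined_loss(sim_loss=0.10, reuse_rate=0.90)  # ~0.10: no penalty
```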
The max term ensures that the penalty is applied only when the reuse rate falls below the target, thus incentivizing the model to meet $R_{\mathrm{target}}$. The combined loss function effectively prevents the model from trivially minimizing error through complete recomputation. # 4.3 Grouped Frame Training for Robustness Errors accumulate over time when computations are reused across multiple frames. To ensure robustness, we model error accumulation during training by adopting a grouped frame training strategy. Modeling error accumulation. During training, we adjust our loss functions to account for grouped frames. Specifically, we compute the losses over groups of frames by averaging the similarity loss $\mathcal{L}_{\mathrm{sim}}$ and the reuse loss $\mathcal{L}_{\mathrm{reuse}}$ across all frames in the group. By considering the average losses over the group, the model learns to handle error accumulation effectively over sequences of frames, rather than optimizing for individual frames in isolation. This approach ensures that the model’s performance and reuse behavior are optimized not just for single frames, but for sequences where errors may propagate due to computation reuse. Figure 7: Illustration of our frame-grouping strategy during training. We treat each “I” or “P” frame as a segment boundary and form groups, allowing the model to learn error accumulation. Figure 8: Comparison of (a) frame-wise scheduling and (b) layer-wise scheduling for caching intermediate activations. Efficient grouping strategy. To balance modeling accuracy and training efficiency, we design training sequences that model error accumulation over a manageable number of frames. We use I-frames and P-frames as segment boundaries, dividing frames into segments (e.g., Segment 1 and Segment 2 in Figure 7).
Frames within a segment can influence each other through computation reuse, while intermediate frames across segments remain independent, allowing for efficient modeling of longer sequences. By constructing sequences like I-P-P, corresponding to frames 1, 5, and 9, we simulate error propagation over longer intervals without including intermediate frames (2–4 and 6–8). Within the last segment, we also include frames with remaining $\mathrm{B_{dist2}}$ and $\mathrm{B_{dist1}}$ reference types from the previous segment, so that ReuseViT is able to learn all reference types. Empirical results indicate that grouping six frames in the pattern 1-5-9-13-11-12 strikes an effective balance between accuracy and training efficiency, allowing the model to account for error accumulation over significant temporal spans while managing computational costs. During training, we also experimented with varying group sizes and found that six-frame groupings gave us the best empirical tradeoff between training time and error-accumulation robustness. Larger groupings occasionally made optimization less stable, while smaller ones did not fully capture longer-range drifts. # 5 COMPACTION TECHNIQUES IN DÉJÀ VU In this section, we present the compaction methods used to translate the FLOPs reduction achieved by ReuseViT into tangible performance gains. These methods address practical challenges in implementing ReuseViT efficiently on GPU hardware, where parallelism is crucial to achieving high throughput. # 5.1 Layer-Wise Scheduling A key enabler of our compaction methods is layer-wise scheduling. While the target problem differs, our scheduling scheme is inspired by FlexGen [84], a prior work that aims to deploy large language model (LLM) serving on a single GPU. FlexGen proposes grouping multiple batches for LLM inference and iterating over the batches in a round-robin manner to execute their computations layer by layer.
That way, FlexGen can amortize the I/O costs for model parameters and the KV cache across multiple batches. On the other hand, Déjà Vu has different objectives, since it aims to accelerate ViT encoders, which have smaller model sizes than LLMs and do not need a KV cache. Déjà Vu employs the layer-by-layer computation scheme to mitigate the memory bloating and GPU-unfriendliness problems caused by ReuseViT's computation reuse. Instead of interleaving multiple batches as in FlexGen, Déjà Vu batches multiple frames together and processes the same layer across these frames in a layer-by-layer manner within the same segment. Building upon this layer-wise scheduling, Déjà Vu employs Cached Memory Compaction and Sparse Computation Compaction to more efficiently utilize GPU memory and compute resources.

# 5.2 Cached Memory Compaction

A major overhead in computation reuse arises from storing cached activations, i.e., intermediate outputs computed for reference frames, which subsequent frames rely on.

Layer-wise memory compaction. By exploiting our layer-wise scheduling and the frame-referencing structure of our model (with P-frames acting as boundaries), we compact memory usage at each layer within a segment. Frames other than P-frames do not affect future segments, so their intermediate activations can be discarded once their current segment is processed. Figure 8 illustrates the key difference between the conventional frame-wise approach and our layer-wise scheduling. As illustrated in Figure 8(a), in the conventional frame-wise approach, each frame runs through all layers sequentially. Intermediate outputs must remain in memory until all computations for that frame are complete.
In contrast, in our layer-wise scheduling (Figure 8(b)), the same layer is applied to all frames in the batch, and once a layer is complete for each frame, any unneeded activations are immediately freed. This staggered approach substantially lowers memory overhead, enables larger batch sizes with high GPU utilization, and facilitates deployment of large-scale VideoLMs under limited memory budgets.

# 5.3 Sparse Computation Compaction

ReuseViT's dynamic and often sparse computation patterns pose difficulties for GPUs, which are optimized for dense, regular workloads. To address this, we use stream compaction [7], gathering active (i.e., non-reused) tokens into contiguous memory regions so sparse computations can be converted into dense forms better suited to GPU acceleration.

Layer-wise stream compaction. We implement GPU kernels for stream compaction across different frames within the same segment, as shown in Figure 9. By accumulating active tokens from multiple frames before moving on to the next feed-forward network, we form well-shaped matrices amenable to GPU acceleration. Once the batched computation is complete, we scatter the results back into their original positions in each frame's token sequence, preserving correctness for the next layer. This gather-scatter strategy ensures that even when some frames have only a few active tokens, they can still benefit from dense GPU operations.

Figure 9: Sparse Computation Compaction.

To minimize overhead further, we prioritize CUDA-based implementations for the gather-compute-scatter procedure, avoiding frequent CPU-GPU synchronization. Such an approach not only speeds up the compaction process itself but also reduces latency spikes that might occur from excessive kernel launches or data transfers between CPU and GPU.

# 6 WORKFLOW OF DÉJÀ VU

# 6.1 Overview of Query Processing

On a query, Déjà Vu returns cached embeddings if available; otherwise it generates them with ReuseViT and stores the result.
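The query-processing behavior just described (return a cached embedding if present, otherwise compute and store it) is plain memoization. A minimal sketch, with an in-memory dict standing in for whatever persistent embedding store a real deployment would use:

```python
# Minimal memoization sketch of the query path. The dict is a stand-in for a
# persistent embedding store; compute_fn is a placeholder for ReuseViT.

def get_embeddings(video_id, cache, compute_fn):
    """Return cached embeddings if available; otherwise compute and store."""
    if video_id not in cache:
        cache[video_id] = compute_fn(video_id)
    return cache[video_id]
```

A second query over the same video then hits the cache and skips embedding generation entirely.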
Embedding storage overhead. Storing frame-level VLP embeddings introduces minimal storage overhead. For example, extracting embeddings at 2 FPS from a Full-HD H.264 video ($\sim$625 KB/s) results in approximately 4 KB/s of data (assuming 1024-dimensional FP16 embeddings at $\sim$2 KB each). This constitutes merely $0.64\%$ of the compressed video size, with further footprint reductions achievable via advanced floating-point compression techniques [57, 61, 73].

# 6.2 Offline Preparation

Before handling real queries, Déjà Vu trains the decision and restoration modules for a given ViT backbone. During training, we use the self-supervised objective, which combines a similarity loss (to match the original ViT outputs) and a reuse-based loss (to encourage high reuse rates). The primary user-adjustable hyperparameter is the target reuse rate ($R_{\mathrm{target}}$), defined in Equation 15. Training with different values of $R_{\mathrm{target}}$ allows users to navigate various points on the accuracy-speed tradeoff curve. Alternatively, one can specify a target cosine similarity, whereby the model learns to maximize the reuse rate while conforming to the accuracy threshold defined by that similarity value. We also employ grouped-frame training to model how errors might accumulate. This improves robustness, as the model learns to balance computational savings with accuracy preservation across different video types. During training, only the two lightweight modules are trained, while the pre-trained ViT backbone remains frozen, and convergence typically occurs within an hour. This design significantly simplifies deployment by removing the need to store, transfer, or manage multiple versions of large model weights.

# 6.3 Online Inference

Once deployed, Déjà Vu employs layer-wise scheduling and memory compaction to translate the FLOP savings from ReuseViT into practical GPU throughput.
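As a back-of-the-envelope check on the storage figures quoted in Section 6.1 (2 FPS, 1024-dimensional FP16 embeddings, roughly 625 KB/s of compressed Full-HD H.264 video):

```python
# Reproducing the embedding storage overhead arithmetic quoted above.
fps = 2
dim, bytes_per_value = 1024, 2                  # FP16 is 2 bytes per value
kb_per_frame = dim * bytes_per_value / 1024     # 2.0 KB per embedding
kb_per_second = fps * kb_per_frame              # 4.0 KB/s of embeddings
video_kb_per_second = 625                       # Full-HD H.264 estimate
overhead = kb_per_second / video_kb_per_second  # 0.0064, i.e. 0.64%
```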
While ReuseViT reduces the token-level computational load, effective scheduling and compaction routines ensure these savings materialize in actual GPU execution time, which is a key requirement for large-scale query systems. To maximize GPU utilization, the system processes multiple videos in a single batch. If only one long video is available, it is split into multiple, uniformly sized segments. For each segment, four consecutive frames are collected via a small queue and fed into ReuseViT, enabling each inference call to process four times the usual batch size in frames. Batching multiple segments is crucial for efficiently utilizing GPU resources, especially when high reuse rates substantially reduce computation per video.

Frame reordering. Frame reordering is handled within the model's forward pass, which reconstructs results in the correct sequence. Consequently, the outer framework does not need to manage any frame shuffling or realignment. Although this staggered approach works well for offline analytics, where latency is less critical, it can also be adapted to streaming scenarios. In a streaming setting, Déjà Vu buffers four frames before computation, introducing a queuing latency of about two seconds at 2 FPS. For real-time applications with strict latency requirements, frame reordering can be disabled to minimize latency, albeit at the cost of lower reuse rates.

Breaking error propagation. Although grouping frames largely confines errors, prolonged reuse can still amplify minor inaccuracies. To counter this, Déjà Vu periodically inserts a fully recomputed I-frame. For instance, if the system resets every twentieth frame, it prevents indefinite error buildup without incurring excessive recomputation costs. This method keeps the overhead below $5\%$ while maintaining accuracy over long sequences.
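The periodic reset can be sketched as a simple modulo rule; the function name and the default period are illustrative (the paper's example resets every twentieth frame):

```python
# Sketch of periodic refresh: force a full recompute (an I-frame) on every
# refresh_period-th frame, bounding how long reuse errors can propagate.

def must_fully_recompute(frame_idx: int, refresh_period: int = 20) -> bool:
    return frame_idx % refresh_period == 0

# With a period of 20, at most 1/20 = 5% of frames are forced to recompute,
# consistent with the sub-5% overhead reported above.
```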
By blending standard ViT-based embedding generation with lightweight, learned modules for inter-frame reuse, Déjà Vu delivers on-demand embeddings for a variety of queries. Its approach avoids repeated backbone retraining, recycles partial results across neighboring frames, and capitalizes on compaction techniques to maximize GPU efficiency. These design choices not only make Déjà Vu robust to queries demanding high accuracy but also ensure practical deployment for both offline and streaming scenarios. As a result, Déjà Vu opens the door to scalable, open-ended video-language applications without the prohibitive compute costs usually associated with large-scale video analytics.

# 7 EVALUATION

# 7.1 Methodology

End tasks. To evaluate the effectiveness of Déjà Vu, we select three distinct Video-Language Models (VideoLMs), each targeting a different task. All these models use embeddings extracted from CLIP [77] (ViT) as their visual backbone. We focus on:

• Video retrieval. We use CLIP4Clip [64] for video retrieval and evaluate on the MSR-VTT [106] dataset, which consists of 10,000 video clips annotated with textual descriptions, typically 10 to 30 seconds long. We report top-5 recall as the accuracy metric.

Figure 10: Tradeoffs for video retrieval (Recall), video question answering (Accuracy), and video question grounding (Grounding Accuracy): (a)–(c) FLOPs reduction versus accuracy and (d)–(f) throughput versus accuracy, comparing No Reuse, DiffRate, CMC, Eventful Transformer, and Déjà Vu.

• Video question answering. We use FrozenBiLM [111] for video question answering (or video QA), a task where the goal is to answer queries based on video content.
We evaluate on How2QA [55], which standardizes input to 60-second clips drawn from longer instructional videos with 44,007 question-answer pairs, and report multiple-choice accuracy as the metric.

• Video question grounding. Video question grounding (or video grounded QA) requires the model to predict the relevant video segment along with its answer. We use TempCLIP [104] and evaluate it on NExT-GQA [104], with longer, untrimmed videos averaging around 45 seconds and short answer segments requiring fine-grained temporal grounding. We report GQA@Accuracy, the percentage of correct answers where the predicted video segment overlaps the ground-truth segment.

To ensure a fair comparison, all baselines sample frames at 2 frames per second, which aligns with common practice in recent video-language models [3, 10, 64, 74, 99, 104, 111, 117]. Throughout all tasks, we use the standard train, val, and test splits provided by each dataset and adhere to the recommended evaluation protocols.

Baselines. As no prior methods directly accelerate VideoLMs, our baselines include temporal ViT acceleration approaches (conceptually closest to Déjà Vu's strategy) and a representative image-based method for contrast. We evaluate three acceleration methods:

• CMC. CMC [86] uses a fixed threshold value on the mean squared error (MSE) to identify unchanged tokens suitable for reuse across frames. It leverages similarity computations provided by specialized video codec hardware and was originally evaluated on action recognition tasks.

• Eventful Transformer. Eventful Transformer [26] applies a static, layer-wise policy that fixes the number of tokens to recompute, regardless of changes in video content. While simple and efficient, this rigid strategy might not adapt effectively to varying temporal content across frames, potentially causing error accumulation in dynamic scenes.

• DiffRate. DiffRate [14] is a representative baseline for images, combining token pruning and merging within an image.
It automatically determines the tradeoff between accuracy and efficiency through backpropagation. Though originally evaluated on image classification, we adapted the policy for VLP models.

CMC and Eventful Transformer propose kernels to reuse computations in the attention layer. In our experiments, these kernels did not yield speedups because the attention layers have relatively small computational costs. Therefore, for both methods, we focus on their core reuse strategies without attention-layer optimizations.

Implementation details. We run all experiments on a server with two 16-core Intel Xeon Gold 6226R CPUs (2.9 GHz), 192 GB of RAM, and an NVIDIA RTX 3090 GPU (24 GB of GDDR6 memory). The software environment includes Ubuntu 24.04 as the operating system, CUDA 12.1, cuDNN 8.9.0, and PyTorch 2.1.0.

# 7.2 Tradeoff between FLOPs and Accuracy

Figure 10 (a)–(c) shows tradeoffs between FLOPs reduction for embedding generation (y-axis) and accuracy (x-axis) for three tasks: video retrieval, video QA, and video question grounding. The x-axis shows the accuracy metric, and the star-shaped marker near the bottom right of each graph indicates the unapproximated model's accuracy. The y-axis shows the achieved FLOPs reduction for embedding generation. A point higher on the y-axis indicates greater FLOPs reduction at the same accuracy level, while a point further to the right indicates better accuracy. Thus, methods near the top-right corner represent more favorable efficiency-accuracy tradeoffs. Next to each marker, we show the computation reuse rate, except for DiffRate, which applies token pruning and merging instead. Notably, techniques that exploit temporal redundancy (CMC, Eventful Transformer, and Déjà Vu) consistently achieve higher FLOPs reduction than the image-focused DiffRate, whose pruning/merging targets only spatial redundancy within individual frames.
Among these temporal-based baselines, Déjà Vu attains the best tradeoff, typically matching or exceeding each method's accuracy at a lower computational cost.

# 7.3 Tradeoff between Throughput and Accuracy

Figures 10 (d)–(f) show throughput improvement for embedding generation on the y-axis. While Déjà Vu and DiffRate are measured on a real GPU system, CMC and Eventful Transformer lack GPU-optimized implementations, so we interpolate their throughput assuming our compaction techniques were applied. Here, we normalize throughput to that of the original model, only configurations that yield an actual improvement are shown, and each marker can be matched to those in the FLOPs plots via the reuse rates. Consistent with the previous FLOPs-based results, Déjà Vu achieves the best throughput-accuracy tradeoff. For instance, Déjà Vu reaches speedups of up to $1.81\times$ for video retrieval, $2.64\times$ for video question answering, and $2.54\times$ for video question grounding. By contrast, Eventful Transformer, the next-best baseline, achieves speedups of $1.32\times$, $2.08\times$, and $2.20\times$, respectively. While baselines exploiting inter-frame redundancies remain competitive for VideoLM tasks, Déjà Vu's tailored optimizations achieve superior throughput-accuracy tradeoffs. Even incremental improvements are crucial: for instance, a 1–2% accuracy gain in leading video QA benchmarks typically requires several months of intensive research [110, 111], and modest speedups can significantly reduce costs and query latency in large-scale deployments. Interestingly, DiffRate configurations become more competitive for throughput, as the high complexity of computation reuse can hinder its realization. However, Déjà Vu's careful optimizations still ensure higher overall speedups.
# 7.4 Breakdown Analysis

At the same reuse rate, Déjà Vu has slightly lower performance than CMC or Eventful Transformer. For instance, in Figure 10 (e), when reuse is at $61\%$, Déjà Vu shows a marginally lower throughput. Figure 11 (a) shows a FLOPs breakdown of these methods at an identical reuse rate on the video retrieval task. In this scenario, the reuse rate is intentionally fixed across all methods to isolate and highlight algorithmic overhead. The main additional overhead in ReuseViT arises from the decision layers and restoration layers, which add about $4\%$ to normalized FLOPs. However, Figure 11 (b) shows that at the same accuracy on video question answering, Déjà Vu achieves a significantly higher reuse rate. Thus, at equivalent accuracy levels, Déjà Vu effectively compensates for the additional overhead by adaptively calibrating reused computations, enabling it to reuse more tokens without sacrificing accuracy. Hence, while Déjà Vu incurs some overhead at the same reuse rate, its adaptive mechanisms enable higher reuse where accuracy must be maintained, which ultimately yields stronger end-to-end speedups when matching target accuracy levels.

Figure 11: Breakdown of FLOPs on video retrieval.

Figure 12: Comparison of peak memory usage with and without cached memory compaction. The dotted line shows the memory capacity of the RTX 3090 GPU.

# 7.5 Memory Overhead

Figure 12(a) compares the peak GPU memory usage of ReuseViT with and without cached memory compaction, across different batch sizes. The blue bars show memory usage when intermediate computations are cached without compaction, while the green bars include compaction. Without compaction, the model processes only 67 batches before running out of memory, but compaction raises this limit to 183, improving GPU utilization and throughput.
By keeping memory usage within GPU limits, practitioners can deploy larger batch sizes or process multiple video streams concurrently, which in turn boosts overall system throughput. This is particularly valuable in server-based or cloud environments where memory is a critical and often expensive resource. Figure 12(b) illustrates how the cache size changes during inference. Without compaction, the cache grows linearly until all layers are processed. With compaction, cached memory for processed frames is freed after certain layers, creating a sawtooth pattern and significantly reducing peak memory usage.

Figure 13: Ablation analysis of compaction techniques.

Figure 14: Fluctuation of computation reuse rate and cosine similarity over time.

Figure 15: Ablation study of ReuseViT's design choices on the NExT-GQA dataset. The full ReuseViT configuration is denoted by a triangular marker.

# 7.6 Ablation Study for Inference Speedup

We conduct an ablation study of inference speedup with a ReuseViT configuration achieving a $61\%$ reuse rate on the video question answering task to show how each technique contributes to Déjà Vu's performance. Figure 13 highlights the throughput gains from each step. The first bar indicates that reusing computation with hard gating alone yields a $1.25\times$ speedup. Layer-wise sparse computation compaction then increases the speedup to $1.45\times$ by shaping operands to better utilize the GPU kernel. Finally, layer-wise memory compaction enables larger batches, improving parallelization and raising the total speedup to $1.62\times$. These results confirm that Déjà Vu's optimizations translate FLOPs savings into real throughput gains.
Each component offers complementary benefits: gating reduces compute, sparse compaction improves GPU workload efficiency, and memory compaction allows larger batches, unlocking significantly more performance together than individually.

# 7.7 Adaptability to Video Content

We compare the adaptability of Eventful Transformer and Déjà Vu on video segments from How2QA, with both methods reaching $84.5\%$ accuracy. Figure 14 depicts the fluctuations in computation reuse rates (top) and cosine similarities (bottom) over time. Initially, both methods fully compute the first frame, then adopt their reuse strategies. Eventful Transformer maintains a fixed reuse rate, leading to a more pronounced decline in cosine similarity. In contrast, Déjà Vu adapts when error begins to accumulate, lowering the reuse rate to protect accuracy and raising it when frames are more similar. Moreover, Déjà Vu learns to adjust reuse behavior based on reference frame types (I- or P-frames), further mitigating error accumulation. By dynamically balancing reuse and recomputation, Déjà Vu preserves higher feature similarity over time and achieves a superior tradeoff between computational efficiency and model accuracy compared to methods with fixed reuse strategies.

# 7.8 Ablation Study for Design Choices

To evaluate the contributions of ReuseViT's core components, we perform an ablation study on the NExT-GQA dataset. Figure 15 assesses visual embedding quality, measured by cosine similarity to the original embedding without reuse, and throughput improvement normalized against the original ViT.
Higher values on both axes indicate better efficiency and accuracy. Figure 15(a) presents incremental component comparisons. Starting from a configuration with only the adaptive decision layer, we sequentially incorporate frame reordering, grouped frame training, and finally, the restoration layer. Notably, even the initial decision layer alone already provides substantial improvements over the static Eventful Transformer baseline, highlighting the significant advantage of adaptive reuse decisions. Each subsequent addition further enhances the tradeoff between efficiency and accuracy. Although the restoration layer introduces a slight computational overhead that reduces peak throughput, it notably improves embedding quality at higher reuse rates. Figure 15(b) examines various frame grouping strategies. Compared to a sequential processing baseline, all reordered groupings achieve improved tradeoffs. Performance consistently increases with segment lengths up to three segments, while extending to four segments yields slightly inferior results. Thus, a three-segment grouping strategy effectively balances computational efficiency and visual embedding quality.

# 8 LIMITATIONS AND FUTURE WORK

While Déjà Vu largely improves computational efficiency for video-language tasks, several limitations remain, motivating future work.

Attention layer reuse. Déjà Vu currently targets only FFN and QKV layers. These layers dominate computational costs at standard resolutions, for instance, 257 tokens for ViT-L/14. However, as resolution and token counts rise, such as 577 tokens (CLIP ViT-L/14@336px), 785 tokens (DINO ViT-B/8) [13], and 1370 tokens (DINOv2 ViT-G/14@518px) [68], attention layers form an increasingly substantial fraction of total FLOPs, reaching up to $23.5\%$. Addressing this will necessitate exploring attention reuse strategies or leveraging sparse attention kernels optimized for GPUs, like those proposed in Eventful Transformer.
Another approach involves specialized hardware accelerators, such as those introduced in CMC.

Broader task generalization. This work evaluates Déjà Vu specifically on retrieval, question-answering, and grounding tasks. Adapting our learned reuse strategies to other video tasks, such as action recognition or object detection, would involve adjustments to task-specific model architectures and setups. While this adaptation is conceptually straightforward, its practical exploration remains an open and promising direction for future research.

Adaptive periodic refresh and online adaptation. Another limitation relates to the adaptive periodic refresh strategy within Déjà Vu. This strategy could benefit from additional refinement for improved handling of long-form video inputs and enhanced robustness across various video domains. Furthermore, future research might explore online fine-tuning methods designed to dynamically adjust reuse policies during inference. Such adjustments would enhance adaptability and efficiency within evolving real-world scenarios.

# 9 RELATED WORK

CNN-based pipelines for known classes. Early systems for known object categories often relied on trained CNNs or proxy models. For instance, NoScope [42] developed lightweight cascades for static-camera queries. Focus [35] and BlazeIt [41] built approximate indexes that were refined by heavier models. Other approaches included Tahoma's [2] use of cascades with input transformations, and MIRIS [5] targeted multi-object tracking. Some systems precomputed object tracks, like OTIF [6], or used detector ensembles, as seen in FiGO [11]. More recent contributions include DoveDB [105] unifying training and tracklet ingestion, Seiden [4] leveraging high-accuracy models with sampling to avoid proxies, and InQuest [81] pairing cheap proxies with oracles for streaming analytics.
While cost-effective for predefined categories, these systems struggle with zero-shot or open-ended queries without retraining. Resource management and domain-specific analytics. Another line of work focuses on scheduling or adapting analytics pipelines at scale. Scanner [75], for example, exploits domain metadata for raw video decoding and dataflow. VideoStorm [116] and Chameleon [38] schedule model cascades for concurrent streams. ODIN [89] monitors domain drift to retrain models for known classes under new conditions. Further techniques involve adaptive query reordering by Hydro [40], using geospatial metadata to skip frames as in Spatialyze [46], or integrating caching to accelerate iterative queries, demonstrated by VIVA [44]. Open-vocabulary indexing and domain-agnostic exploration. To address fixed label sets, several systems enable querying arbitrary classes without retraining. Panorama [119] learns a generic feature space. TASTI [43] reuses a learned semantic index for new labels, and Boggart [1] provides a model-agnostic index. Other systems support interactive exploration, such as EVA [109]. VOCALExplore [22] trains detectors on-the-fly via active labeling, while SeeSaw [66] leverages CLIP embeddings for zero-shot retrieval. LVS [49] approximates embeddings from cached ones, and SketchQL [101] uses sketch-based queries. Domain-agnostic storage solutions like VStore [107], TASM [21], and VSS [33], along with decoding-stage acceleration in CoVA [36] and TVM [123], also enhance query speed. Modified architectures and acceleration for ViTs. Following ViT’s [25] success, many subsequent architectures aim to reduce computation using techniques such as local attention or convolutions [18, 30, 34, 63, 70, 96, 97]. For video processing, temporal mixing layers are common additions [16, 54, 79, 91, 95, 100, 112], though no standard has emerged. 
Standard ViTs are accelerated via token pruning (removing less important tokens) [12, 14, 27, 47, 62, 78] or token merging (combining similar tokens) [8, 12, 14, 17, 58, 65]. Pruning techniques can be fixed-rate [78] or adaptive [27], sometimes combined with merging strategies [47, 62]. Merging approaches, exemplified by ToMe [8] and TCFormer [65], downsample tokens, with recent extensions to temporal attention [17, 58]. Some methods integrate both pruning and merging [12, 14] or propose specialized hardware [24, 60, 76, 94, 113]. Computation reuse and decision making. Reusing computations across frames offers gains beyond intra-frame redundancy. Eventful Transformer [26] and CMC [86] use manual cross-frame reuse decisions. In contrast, Déjà Vu employs a fully learnable mechanism for improved efficiency. Prior work on temporal redundancy in CNNs includes incremental activation updates [9, 69], use of motion vectors [87], delta updates [71, 72], or discarding redundant frames and regions [20, 59, 90, 93, 103]. Several studies also learn reuse decisions end-to-end. These include methods for dynamic fusion [121] or layer skipping [15, 98, 102]. While these predominantly target CNNs, Déjà Vu adapts this concept to VideoLMs by integrating a learnable reuse mechanism in ViTs for significant speedups while preserving accuracy.
Recently, Video-Language Models (VideoLMs) have demonstrated remarkable capabilities, offering significant potential for flexible and powerful video query systems. These models typically rely on Vision Transformers (ViTs), which process video frames individually to extract visual embeddings. However, generating embeddings for large-scale videos requires ViT inferencing across numerous frames, posing a major hurdle to real-world deployment and necessitating solutions for integration into scalable video data management systems. This paper introduces Déjà Vu, a video-language query engine that accelerates ViT-based VideoLMs by reusing computations across consecutive frames. At its core is ReuseViT, a modified ViT model specifically designed for VideoLM tasks, which learns to detect inter-frame reuse opportunities, striking an effective balance between accuracy and reuse. Although ReuseViT significantly reduces computation, these savings do not directly translate into performance gains on GPUs. To overcome this, Déjà Vu integrates memory-compute joint compaction techniques that convert the FLOP savings into tangible performance gains. Evaluations on three VideoLM tasks show that Déjà Vu accelerates embedding generation by up to $2.64\times$ within a $2\%$ error bound, dramatically enhancing the practicality of VideoLMs for large-scale video analytics.
# I. INTRODUCTION

Mosquito-borne diseases continue to be a leading cause of illness and death worldwide, particularly affecting low- and middle-income countries. According to the World Health Organization (WHO), approximately 700 million people are affected by mosquito-borne illnesses every year, resulting in over one million deaths [1]. Diseases such as Malaria, Dengue Fever, Zika Virus, and Chikungunya are transmitted by mosquito vectors that breed primarily in stagnant water. The economic burden associated with these diseases is also substantial; for instance, malaria alone is estimated to cost African economies over $12 billion annually due to healthcare expenses and lost productivity [2]. Despite ongoing efforts to combat these diseases, the incidence of dengue has increased 30-fold in the past 50 years, with nearly 390 million cases reported annually [3]. The rapid pace of urbanization and poor water management in many regions further exacerbates the problem by creating ideal conditions for mosquito breeding. Traditional mosquito control methods, including manual inspection and elimination of breeding sites, are labor-intensive, time-consuming, and often infeasible in large or inaccessible areas. These challenges underscore the urgent need for scalable, technology-driven approaches to detect and manage mosquito habitats more efficiently. Although artificial intelligence and computer vision have made significant strides, the task of accurately detecting mosquito breeding places and analyzing water surfaces remains challenging. Existing models often underperform due to a lack of specialized datasets capable of handling both detection and segmentation tasks. Moreover, most datasets are unimodal, lacking contextual explanations that could aid in human interpretation or decision-making [4]. This absence of interpretability limits the utility of AI models in real-world public health scenarios, where both performance and explainability are critical.
To address these gaps, we introduce VisText-Mosquito, a multimodal dataset specifically designed to support three interrelated tasks: (i) detection of mosquito breeding sites, (ii) segmentation of water surfaces within these sites, and (iii) generation of natural language reasoning that describes the visual content and justifies AI predictions. The dataset comprises three main parts:

• Breeding Place Detection: Includes 1,828 images with 3,752 annotations across five classes: coconut_exocarp, vase, tire, drain_inlet, and bottle.

• Water Surface Segmentation: Contains 142 images with 253 annotations for two classes: vase_with_water and tire_with_water.

• Textual Reasoning: Each image is also paired with a human-authored natural language explanation, enabling models to learn visual-linguistic associations and produce interpretable reasoning.

Figure 1: Overview of the proposed pipeline, spanning data collection, annotation, and preprocessing; detection models (YOLOv5s, YOLOv8n, YOLOv9s); segmentation models (YOLOv8x-Seg, YOLOv11n-Seg); and BLIP-based question-and-reasoning generation, with the data split into training, validation, and test sets.

This paper is structured as follows: Section II reviews existing related works. Section III describes the methodology employed in our study. Section IV presents the result analysis, discussing the performance of our models across all modalities. Finally, Section V concludes the paper with a summary of contributions and future research directions.

# II.
Mosquito-borne diseases remain a significant global health threat, necessitating innovative AI-driven approaches for early detection and control. Machine learning models and high-quality datasets play a pivotal role in identifying mosquito breeding sites, enabling proactive interventions. Several datasets facilitate this research, including MosquitoFusion [5], M. Mehra et al. [6], and Chathura et al. [7], which capture diverse breeding environments and enhance model adaptability. Object detection models like YOLOv8 have demonstrated effectiveness, achieving an mAP@50 of 57.1% on the MosquitoFusion dataset. CNN-based segmentation models have also been used to identify stagnant water surfaces in drone and satellite imagery with high precision [8]. Transformer-based approaches such as DETR and SAM further improve feature representation and segmentation performance in mosquito habitat analysis tasks [9]. Meanwhile, geospatial AI techniques leveraging satellite and UAV imagery have been successfully applied to GIS-based mosquito risk mapping, enabling detection and classification of urban breeding hotspots with spatial accuracy [10]. Additionally, multimodal fusion approaches that incorporate weather, topographical, and epidemiological data have enhanced vector prediction models for disease outbreak mapping and control [11].

Despite these advancements, notable gaps remain. Many existing datasets are limited to controlled environments and focus on single-class detection, which reduces their generalizability to complex, real-world scenarios [12]. Most critically, and to the best of our knowledge, no prior work has introduced a multimodal dataset that integrates both visual (image-based detection and segmentation) and textual (natural language reasoning) components for mosquito breeding site analysis.
This represents a significant gap in the field, as combining vision and language could improve interpretability, model trust, and real-world applicability of AI-driven mosquito surveillance systems.

# III. METHODOLOGY

In this section, we discuss the detailed methodology of our proposed solution, shown in Figure 1.

# A. Data Collection

The data collection process is designed to ensure diversity, accuracy, and real-world relevance in capturing mosquito breeding sites and water surfaces. High-quality images are collected from various regions across Bangladesh, covering diverse breeding habitats under both daylight (8 AM–5 PM) and nighttime conditions to enhance dataset variability. To improve model generalization, multiple images are taken from different angles and distances (1–3 meters), ensuring a detailed visual representation. Both natural and artificial breeding sites are documented, though challenges such as unpredictable weather and difficult terrain occasionally impacted data collection. Ethical considerations are prioritized by obtaining permission from local authorities and property owners. The process remains non-invasive, avoiding harm to natural habitats or disruptions to local communities. Anonymization techniques are applied to protect sensitive location details.

The initial dataset comprises 1,828 images with 3,752 annotations for breeding place detection and 142 images with 253 annotations for water surface segmentation. This ensures a diverse, comprehensive foundation for training models on mosquito habitat detection and surface segmentation. Additionally, a text modality is included in the dataset to enable multimodal analysis. This modality contains 3,762 instances, each associated with an image and annotated with three text fields:

(a) Question: A binary question asking whether the image shows a mosquito breeding site.

(b) Response: A “Yes” or “No” answer (3,748 “Yes” and 14 “No” responses).
(c) Reasoning: A short free-text explanation justifying the response. The average length of the reasoning statements is approximately 36 words.

This textual annotation provides semantic context and interpretability, significantly enhancing the dataset’s capacity for explainable AI.

Fig. 2. Fully tagged and labeled images.

# B. Data Preprocessing

The data preprocessing involves annotation, transformations, and augmentation to enhance the dataset. All images are manually annotated using the Roboflow [13] platform, ensuring precise labeling of mosquito breeding sites and water surfaces. Figure 2 shows examples of the annotated images. The following preprocessing steps are applied to each image:

(a) Auto-Orient: Images are auto-oriented to correct any device orientation inconsistencies.

(b) Resize: All images are resized to 640 × 640 pixels for uniform input shape.

(c) Auto-Adjust Contrast: The contrast is automatically adjusted to enhance visual clarity.

To improve model robustness, the following augmentation techniques are applied:

(a) Flip: Horizontal flips double the dataset size by varying object orientations.

(b) Rotation: Random rotations introduce alignment variations.

(c) Brightness Adjustment: Image brightness is varied to simulate real-world lighting conditions.

As a result of these augmentations, the total number of images in the dataset increases to 4,425 for the detection part and 331 for the segmentation part. This augmentation strategy significantly enhances the dataset’s variability, ensuring that models trained on it are more robust and capable of generalizing well to unseen data. For the text modality, the binary responses (“Yes” or “No”) and the accompanying reasoning statements are initially generated using the Gemini-2.5-Flash model. To ensure high-quality annotations, all generated responses are subsequently curated and validated by human annotators.
This semi-automated annotation workflow allows efficient dataset expansion while preserving semantic integrity and contextual accuracy.

# C. Distribution Analysis and Folder Structure

The breeding place detection subset of the dataset comprises a total of 1,828 images with 3,752 annotations distributed across five classes. The class-wise distribution indicates that the Coconut-Exocarp class has the highest number of instances with 923 annotations, followed closely by the Vase class with 911 annotations. The Tire class contains 780 annotations, while the Drain-Inlet and Bottle classes have 585 and 553 annotations, respectively. For the segmentation part of the dataset, there are 142 images with a total of 253 annotations across two classes: vase_with_water and tire_with_water. The vase_with_water class has a significantly higher number of annotations, with 181 instances, compared to the tire_with_water class, which contains 72 annotations. Table I summarizes the class-wise annotation distribution in our dataset.

TABLE I
ANNOTATION DISTRIBUTION IN VISTEXT-MOSQUITO DATASET

| Task | Class | Annotations |
| --- | --- | --- |
| Detection | Coconut-Exocarp | 923 |
| Detection | Vase | 911 |
| Detection | Tire | 780 |
| Detection | Drain-Inlet | 585 |
| Detection | Bottle | 553 |
| Segmentation | vase_with_water | 181 |
| Segmentation | tire_with_water | 72 |

In addition to visual data, the dataset contains textual annotations in the form of reasoning responses that describe the rationale behind each detection or segmentation. Analysis of the reasoning texts reveals that the average length is approximately 230 characters, with most entries ranging between 175 and 280 characters. The text lengths follow a roughly normal distribution, indicating consistency in the annotation style. The most frequently occurring terms in the reasoning responses include phrases such as “stagnant water,” “mosquitoes,” “coconut shell,” and “potential breeding site,” reflecting common descriptors and domain-specific language used during the annotation process.

The organization of the dataset is designed to optimize data management and accessibility for both object detection and segmentation tasks. The dataset is divided into three main directories: Train, Valid, and Test.
Each of these directories contains two sub-folders:

images: This folder houses the visual data in the form of images collected from various mosquito breeding sites.

labels: This folder contains the corresponding annotation files for each image. The annotations detail the positions and classes of objects or segments identified in the images, serving as a guide for training the machine learning models.

This dual-folder structure is consistently maintained across the Train, Valid, and Test directories to streamline the dataset’s usability. Organizing the data in this manner facilitates the training and validation processes by clearly distinguishing between the images and their respective labels. In addition to the visual components, the dataset also includes a textual reasoning component, which provides natural language justifications for each image annotation. These reasoning texts are stored in a separate CSV file that contains a filename column. This column acts as a key to link each reasoning entry directly to its corresponding image file in the dataset.

# D. Experimental Setup

All the images in the dataset are manually reviewed to ensure that no individually identifiable information is included or embedded in the dataset. This careful review process is implemented to maintain privacy standards and ensure the dataset’s suitability for training deep learning models. For the breeding place detection part, object detection models are trained using pre-trained versions of the YOLOv5s, YOLOv8n, and YOLOv9s models. For the segmentation part, the YOLOv8x-Seg and YOLOv11n-Seg models are employed due to their advanced capabilities in pixel-level segmentation tasks, which are critical for accurately identifying water surfaces in potential mosquito breeding sites.
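The folder layout and CSV link described above can be consumed with a few lines of code. The sketch below parses a YOLO-format label file (one `class cx cy w h` row per object, coordinates normalized to [0, 1]) and indexes the reasoning CSV by its filename column; the image name, the label values, and the CSV columns other than `filename` are illustrative assumptions, not the dataset's exact contents.

```python
import csv
import io

def parse_yolo_labels(label_text):
    """Parse a YOLO-format label file: one 'class cx cy w h' row per object."""
    boxes = []
    for line in label_text.strip().splitlines():
        cls, cx, cy, w, h = line.split()
        boxes.append({"class_id": int(cls),
                      "cx": float(cx), "cy": float(cy),
                      "w": float(w), "h": float(h)})
    return boxes

def link_reasoning(csv_text):
    """Index the reasoning CSV by its 'filename' column so each image's
    explanation can be looked up directly."""
    return {row["filename"]: row for row in csv.DictReader(io.StringIO(csv_text))}

# Hypothetical two-object label file (in practice, read from a labels/ folder).
labels = parse_yolo_labels("0 0.51 0.42 0.20 0.18\n2 0.25 0.70 0.30 0.22")

# Hypothetical reasoning CSV with assumed column names beyond 'filename'.
reasoning = link_reasoning(
    "filename,question,response,reasoning\n"
    "img_0001.jpg,Does the image show a mosquito breeding site?,Yes,"
    "\"The tire holds stagnant water, a potential breeding site.\"\n")

print(len(labels), labels[0]["class_id"])     # → 2 0
print(reasoning["img_0001.jpg"]["response"])  # → Yes
```

In a real pipeline the same join would run over every row of the CSV, pairing each image, its label file, and its reasoning text into one training record.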
For the textual reasoning component, we utilize the BLIP (Bootstrapped Language Image Pretraining) [14] model to generate natural language descriptions that explain the visual content of each annotated image. BLIP is a vision-language model (VLM) that integrates image and text understanding and is pretrained on large-scale image-caption datasets using contrastive and generative objectives. In our method, we fine-tune the BLIP model on our curated set of reasoning texts aligned with the annotated images. During training, the model learns to associate specific visual patterns, such as objects like tires or vases containing water, with semantically rich textual descriptions that reflect potential mosquito breeding risks.

The dataset is randomly split into three subsets: 70% training images, 20% validation images, and 10% test images. This split ensures a comprehensive evaluation of the models’ performance, providing ample data for both training and validation while reserving a portion for unbiased testing. The training process is conducted on a Windows 11 (Version 23H2) machine equipped with an Nvidia RTX 3070Ti GPU with 8 GB of video memory and an AMD Ryzen 5800X processor. The training runs for a total of 100 epochs with the input image size set to 640 pixels, and standard hyperparameters are utilized throughout the training sessions to ensure consistent and reproducible results.

# IV. RESULT ANALYSIS

In this section, we present a comprehensive performance analysis of the models used for three core tasks: object detection, water surface segmentation, and multimodal reasoning. Evaluation metrics include Precision, Recall, and Mean Average Precision at 50% Intersection over Union (mAP@50), commonly adopted for assessing object detection and segmentation, alongside BLEU, BERTScore, ROUGE-L, and final loss for text generation evaluation.

# A. Object Detection Performance
To detect potential mosquito breeding containers, we train and evaluate three object detection models, YOLOv5s, YOLOv8n, and YOLOv9s, on 1,828 annotated images across five object classes. The detailed performance is summarized in Table II.

TABLE II
OBJECT DETECTION MODEL PERFORMANCE

The YOLOv9s model achieves the highest precision and mAP@50, demonstrating its superior ability to accurately localize and classify breeding-related objects. In contrast, YOLOv5s offers the most balanced performance, maintaining high recall and demonstrating stable predictions across all classes, making it suitable for applications where minimizing false negatives is essential. YOLOv8n performs slightly lower in all metrics, suggesting that the architectural advancements in YOLOv9s provide a meaningful edge in complex real-world imagery.

# B. Water Surface Segmentation Performance

The segmentation task focuses on identifying water surfaces in objects like vases and tires, critical indicators of mosquito breeding potential. We evaluate two advanced models, YOLOv8x-Seg and YOLOv11n-Seg, on 142 images annotated for vase_with_water and tire_with_water. Table III presents their performance.

TABLE III
SEGMENTATION MODEL PERFORMANCE

YOLOv11n-Seg consistently outperforms YOLOv8x-Seg across all three metrics. The improved recall indicates that YOLOv11n-Seg is more effective in correctly segmenting water surfaces without missing positive cases, which is essential in public health applications. The marginal gains in mAP@50 suggest greater consistency in its pixel-level predictions, which is especially important in distinguishing water patches under occlusion, varying illumination, or cluttered backgrounds.

# C. Multimodal Reasoning Performance
In the textual reasoning task, we fine-tune the BLIP model to generate natural language justifications for each detection. This model is trained on image-reasoning pairs from the dataset to map visual patterns to semantically aligned textual outputs. After training, the model achieves a final loss of 0.0028, indicating successful convergence. The quality of the generated reasoning is quantitatively evaluated using BLEU, BERTScore, and ROUGE-L metrics, as shown in Table IV.

TABLE IV
MULTIMODAL REASONING PERFORMANCE (BLIP MODEL)

| Metric | Score |
| --- | --- |
| BLEU | 54.7 |
| BERTScore | 0.91 |
| ROUGE-L | 0.87 |
| Final loss | 0.0028 |

The high BERTScore (0.91) and ROUGE-L (0.87) indicate that the generated reasoning texts closely match the semantic and structural properties of the ground truth. The BLEU score of 54.7 confirms strong n-gram overlap, which is valuable in capturing factual consistency. Qualitative reviews of sample outputs show that BLIP can correctly contextualize key visual cues, such as stagnant water, container types, and environmental clutter, into meaningful, human-readable explanations, enhancing model transparency.

The combined results from the detection, segmentation, and reasoning tasks validate the robustness and applicability of the VisText-Mosquito dataset and the supporting models. The YOLOv9s and YOLOv11n-Seg models exhibit state-of-the-art performance in their respective tasks, while the BLIP model adds interpretability by providing contextual descriptions aligned with public health goals. This multimodal pipeline offers a powerful toolset for scalable mosquito surveillance and control systems, bridging the gap between detection accuracy and decision-making transparency.
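For reference, ROUGE-L scores a candidate sentence against a reference via their longest common subsequence (LCS). The sketch below is a minimal token-level implementation of the standard LCS-based F1 formulation, not the exact evaluation code used here; the example sentences are illustrative.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists (DP table)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l(candidate, reference):
    """ROUGE-L F1: harmonic mean of LCS-based precision and recall."""
    c, r = candidate.lower().split(), reference.lower().split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)

ref = "the tire holds stagnant water indicating a potential breeding site"
cand = "the tire holds stagnant water a potential breeding site"
print(round(rouge_l(cand, ref), 3))  # → 0.947
```

Here the candidate is a 9-token subsequence of the 10-token reference, so precision is 1.0, recall is 0.9, and the F1 is 2 · 0.9 / 1.9 ≈ 0.947.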
Mosquito-borne diseases pose a major global health risk, requiring early detection and proactive control of breeding sites to prevent outbreaks. In this paper, we present VisText-Mosquito, a multimodal dataset that integrates visual and textual data to support automated detection, segmentation, and reasoning for mosquito breeding site analysis. The dataset includes 1,828 annotated images for object detection, 142 images for water surface segmentation, and natural language reasoning texts linked to each image. The YOLOv9s model achieves the highest precision of 0.92926 and mAP@50 of 0.92891 for object detection, while YOLOv11n-Seg reaches a segmentation precision of 0.91587 and mAP@50 of 0.79795. For reasoning generation, our fine-tuned BLIP model achieves a final loss of 0.0028, with a BLEU score of 54.7, BERTScore of 0.91, and ROUGE-L of 0.87. This dataset and model framework emphasize the theme "Prevention is Better than Cure", showcasing how AI-based detection can proactively address mosquito-borne disease risks. The dataset and implementation code are publicly available at GitHub: https://github.com/adnanul-islam-jisun/VisText-Mosquito
# 25.1 INTRODUCTION

The penetration of software‑based systems has transformed the ways in which almost every industry operates. From controlling nuclear power stations to maneuvering spacecraft, complex software systems are used to interface with many critical systems. It is essential to ensure that these software systems are reliable and resilient. If these were to fail or get compromised, they would have a domino effect on subsequent systems. Supply chain attacks are an emerging threat targeting these systems. To quote an example of a popular widespread attack, the “SolarWinds hack” in late 2020 (Analytica, 2021) had led to a series of data breaches that affected tens of thousands of customers around the globe. Behind the screens, the cybercriminals had exploited the software package supply chain to distribute Trojan versions of the software masqueraded as updates and patches. As an example of how this attack has resulted in consequent damage, the hackers who attacked a cybersecurity firm (named FireEye) obtained unauthorized access to confidential tools that the company used for security auditing. The security flaw discovered in Apache Log4j (MITRE, 2021) is another notable vulnerability with a Common Vulnerability Scoring System (CVSS) score of 10 (the highest possible score) that had devastating consequences. The Log4j library is widely used in Java applications and thus, the vulnerability impacted a very wide range of software and services. Such vulnerabilities leave organizations exposed and susceptible to attack. More recently, Crowdstrike reported a supply chain attack on March 29, 2023, involving the popular VoIP program 3CXDesktopApp (Kucherin et al., 2023). The infection spreads through tampered 3CXDesktopApp MSI installers, including a Trojan macOS version, resulting in not just financial loss, but also loss of trust for the company (Madnick, 2023). Note that the package supply chain is not restricted only to the patches and updates.
The distribution networks are involved during all stages of the software life cycle. Right from installing the tools required to set up the development environment, to pushing out newer versions of the packaged software product, different software supply chains are involved in all phases (Ohm et al., 2020). Figure 25.1 illustrates the entanglement and high involvement of software distribution supply chains when operating critical systems. This applies to various sectors like smart grids, manufacturing, healthcare, and finance. Modern infrastructure, from PLCs to data analytics, relies on multiple software systems and their supply chain dependencies. While Industry 4.0 has revolutionized processes and Industry 5.0 aims to merge cognitive computing with human intelligence, the cyber‑attack surface continues to expand (Culot et al., 2019).

A software package refers to a reusable piece of software/code that can be obtained from a global registry and included in a developer’s programming environment. In fact, packages serve as reusable modules integrated with developers’ application code, abstracting implementation details and addressing common needs not supported by native applications, such as database connections. Most packages are available through Free and Open‑Source Software (FOSS) contributions, aiding in application development by reducing time and effort. Packages may have dependencies; for example, installing package X would automatically install its dependencies like package Y. Projects may contain hundreds or thousands of dependencies managed by package managers, including those developed by the developers or published by others. For example, in the JavaScript ecosystem, the two widely employed package managers are NPM and YARN (Vu et al., 2020). CLI tools resolve packages by name and version through communication with the corresponding registry.
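The dependency behavior described above (installing package X automatically pulls in Y, and so on down the tree) can be sketched as a tiny resolver over a hypothetical registry index. The registry contents and package names are illustrative; real resolvers such as NPM's additionally handle version ranges, lockfiles, and conflicts.

```python
# Hypothetical registry index: package name -> direct dependencies.
REGISTRY = {
    "app": ["X"],
    "X": ["Y", "Z"],
    "Y": ["Z"],
    "Z": [],
}

def resolve(package, registry, seen=None):
    """Return the transitive dependency closure of `package`,
    depth-first and deduplicated, as a CLI resolver would before install."""
    if seen is None:
        seen = []
    for dep in registry[package]:
        if dep not in seen:
            seen.append(dep)
            resolve(dep, registry, seen)
    return seen

print(resolve("app", REGISTRY))  # → ['X', 'Y', 'Z']
```

Even in this toy example, installing one direct dependency silently brings in two transitive ones, which is exactly the surface that supply chain attacks exploit.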
JavaScript’s popularity stems from its widespread use across the entire software and hardware stack, running on servers and mobile devices, which mutually sustains its language and package registries. As a matter of fact, in 2020, an article from the official NPM blog reported that more than 5 million developers use more than 1.3 million packages from the NPM registry, which itself serves up to 125 billion downloads every month. These statistics stand as a testimony to the popularity of package managers within developer communities.

FIGURE 25.1 Involvement of software supply chains in critical systems.

This work presents a comprehensive study of the security posture of existing package distribution (PD) systems and uses this research as a base to propose an architecture that addresses the most critical security concerns arising out of this tight coupling of software package supply chains and the infrastructure that depends on them. This proposed architecture provides end‑to‑end integrity of the package supply chain to mitigate the cascading effects of a critical failure. While NPM or PyPI might not be a part of the toolchain that every software developer would use, this chapter would continue to quote these systems as an indicative example of the current state of package managers. Nevertheless, the architecture itself is platform‑agnostic and caters to the overall goal of securing the software package supply chains across all phases of a product’s life cycle and its usage in critical systems.
Following this, a summary of different attack vectors and corresponding mitigation strategies is also presented. Finally, the potential impacts of this solution are discussed before concluding the chapter.

# 25.2 RELATED WORK

# 25.2.1 Studies on Package Distribution Frameworks

With the advancement in web technologies and increased usage of web apps, there has been an exponential increase in the number of frameworks available for developers to choose from. The deployment of cloud native applications and orchestrated micro‑services has also fueled the frequency and magnitude at which these services are consumed. This section presents an overall survey of the current software distribution mechanisms and then analyzes them in the context of critical systems to understand the threat landscape. Catering to the potential needs of web practitioners, software engineering quality metrics have been used to evaluate each alternative. Factors like modularity, scalability, and reliability play a dominant role in the perception of a framework (Graziotin and Abrahamsson, 2013). An inundation of micro‑packages will result in a fragile ecosystem that becomes sensitive to any critical dependency changes. There can be a ripple effect down the dependency tree in case of any breakage (Librantz et al., 2020). Some packages perform trivial tasks, but others serve as interfaces to load foreign dependencies and third‑party modules, indicating that package complexity isn’t accurately defined by statistics like lines of code (LOC). Studies delve into statistics such as average package size, dependency chain size, and usage cost, emphasizing the importance of package stability and their impact on delivering end solutions (Kula et al., 2017). The Python development ecosystem is also highly mature and growing in popularity (Bommarito and Bommarito, 2019).
The repository’s growth has been measured experimentally based on factors like package versions, user releases, module size, and package imports. This highlights the significance of frameworks and the extensive library availability. Enhancing PD architecture can significantly impact the IT industry, emphasizing the need for a robust and secure package manager and distribution framework. The security of these PD frameworks has been a critical concern ever since the popularity of package registries began to increase (Achuthan et al., 2014). To address the security concerns over Software Dependency Management, there have been various attempts to leverage technologies ranging from virtualization to distributed architectures (D’mello and Gonzalez‑velez, 2019).

Markus Zimmermann et al. (2019) have studied the security risks for NPM users and explored several mitigation strategies. The study was performed by analyzing dependencies among packages, monitoring the maintainers responsible, and tracking publicly reported security vulnerabilities. There have also been similar attempts to devise vulnerability analysis frameworks by Ruturaj K. Vaidya et al. (2019). Once again, it is found that issues in individual packages can have a ripple effect across the ecosystem. The authors found that many projects unwittingly use vulnerable code due to lack of maintenance, even after vulnerabilities have been publicly announced for years. They compared the effectiveness of preventative techniques such as total first‑party security and trusted maintainers.

When a package needs to be installed, there are a lot of tasks that happen under the hood. NPM not only downloads and extracts packages but also executes install hooks, which can include compiling sources and installing dependencies. While some tasks are essential, malicious tasks can also be run. There have been cases where post‑install scripts were used to distribute malware (Wyss et al., 2022).
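The install hooks mentioned above are declared in a package's manifest: npm automatically runs the `preinstall`, `install`, and `postinstall` lifecycle scripts from package.json during installation, which is how post-install malware gets executed. A reviewer can surface these hooks before installing; below is a minimal sketch, where the manifest contents (package name and commands) are hypothetical.

```python
import json

# npm runs these lifecycle scripts automatically on `npm install`.
LIFECYCLE_HOOKS = {"preinstall", "install", "postinstall"}

def flag_install_hooks(manifest_text):
    """Return the automatic install-time scripts declared in a package.json
    so they can be reviewed (or sandboxed) before installation."""
    scripts = json.loads(manifest_text).get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in LIFECYCLE_HOOKS}

# A hypothetical manifest with a suspicious hook (attacker URL is fictional).
manifest = json.dumps({
    "name": "left-padd",
    "version": "1.0.1",
    "scripts": {
        "test": "node test.js",
        "postinstall": "node collect.js && curl -s http://attacker.example | sh",
    },
})
print(flag_install_hooks(manifest))  # flags only the automatic install-time hook
```

Scripts such as `test` are only run on demand, so the sketch reports just the hooks that would execute without any user action.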
A major incident unfolded when malicious payloads infiltrated the widely used NPM package “event‑stream,” impacting millions of installations. This prompted package registries to prioritize security measures. In 2018, attackers exploited systems running Electron framework apps due to outdated chromium packages, despite known vulnerabilities. NPM issued an advisory addressing a vulnerability allowing reverse shells and arbitrary data access from malicious package installations (Baldwin, 2018). In November 2017, user “ruri12” uploaded three malicious packages – libpeshnx, libpesh, and libari – to official channels like RubyGems and PyPI, but their discovery didn’t happen until July 2019 (Robert Perica, 2019). This delay prompted calls for automated malware checks. Another recent incident involved two typo‑squatted Python libraries discovered stealing SSH and GPG keys (Cimpanu, 2019). Despite their removal, many developers had already incorporated them into their projects, illustrating the significant impact of such attacks on both independent developers and companies reliant on open‑source frameworks and packages, causing distrust within the community.

At this juncture, it is also worth pointing out that the 3CX attack (mentioned previously) was the result of another supply chain attack. A 3CX employee downloaded a tainted version of “X Trader” software in April 2022. The X Trader software was used by traders to view real‑time and historical markets and was developed by another company, “Trading Technologies,” which discontinued the software in 2020. However, the software was still available for download from the company’s website, which itself was compromised in February 2022 (Page, 2023). This incident further highlights the critical nature of supply chain attacks, as the potential for cascading is extremely high.

NPM offers an API to enhance visibility into the software package supply chain, providing critical information about a package’s publication context.
This includes metadata such as payload information, integrity hash, and Indicators of Compromise like IP addresses and file hashes. The newly introduced Security Insights API (Adam) exposes a GraphQL schema for accessing publication information. Two‑factor authentication for the publishing account enhances security assessment, while publishing over the Tor network may raise suspicions of malicious behavior. Sandboxed execution and post‑install script analysis can further aid in flagging tasks with malicious intent (Murali et al., 2020).

For Python packages, open‑source projects such as Safety DB maintain a public record of known security vulnerabilities. Packages are reviewed by filtering change logs and Common Vulnerabilities and Exposures (CVEs) for flagged keywords. However, it is worth pointing out that vulnerabilities are only fixed after they are publicly disclosed and are not checked prior to the public announcement (Alfadel et al., 2023). Platforms like Snyk Intel and the Sonatype open‑source software (OSS) index aid developers in identifying and resolving open‑source vulnerabilities.

The Update Framework (TUF) is a collaborative effort aimed at securing update delivery across software updaters, library package managers, and system package managers. TUF, maintained by the Linux Foundation under the Cloud Native Computing Foundation (CNCF), safeguards against compromised repository signing keys and is utilized in production systems by multiple organizations. Uptane and Upkit, based on TUF guidelines, have effectively secured updates for automotive and Internet of Things (IoT) devices. Despite their potential for broader application, adoption rates remain low across industries.

# 25.2.2 Current Security Landscape

To securely store and distribute packages, having accurate information is crucial for risk assessment. Current security tools often identify vulnerabilities only after an extensive audit of the end product, neglecting details about the publishing pipeline.
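One concrete use of the integrity hash metadata discussed above is verifying a downloaded artifact before installation. npm records a Subresource-Integrity-style value (`sha512-` followed by a base64 digest) in a package's `dist.integrity` metadata; the sketch below checks bytes against such a value, with the "tarball" payload standing in as an illustrative assumption.

```python
import base64
import hashlib

def sri_digest(data, algo="sha512"):
    """Compute a Subresource-Integrity-style digest: '<algo>-<base64 hash>'."""
    h = hashlib.new(algo, data).digest()
    return f"{algo}-{base64.b64encode(h).decode()}"

def verify(data, expected_integrity):
    """Compare downloaded bytes against a published integrity value."""
    algo = expected_integrity.split("-", 1)[0]
    return sri_digest(data, algo) == expected_integrity

# Hypothetical tarball bytes standing in for a downloaded package.
tarball = b"package payload"
published = sri_digest(tarball)
print(verify(tarball, published), verify(b"tampered payload", published))  # → True False
```

Note that an integrity hash only proves the bytes match what the registry published; if the registry entry itself was poisoned, the hash will still verify, which is why the chapter argues for independent observers.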
Understanding existing mitigation methods and event flow is key to designing an effective architecture. Compromised systems offer adversaries a range of techniques to cause harm. Infected applications can exploit remote services and steal credentials. Client software vulnerabilities may expose installed packages and sensitive metadata. Adversaries can establish persistent control through malicious droppers or by connecting infected machines to a Command‑and‑Control (C2) server, enabling sophisticated advanced persistent threat (APT) attacks.

Attackers conduct supply chain attacks by injecting malicious code into open‑source projects, targeting downstream consumers for execution during installation or runtime. They can target any project type and condition code execution based on factors like lifecycle phase, application state, operating system, or downstream component properties (Ohm et al., 2020). The attacks involve creating and promoting a distinct malicious package from scratch, entailing the development of a new open‑source software (OSS) project with the intention of spreading malicious code (Balliauw, 2021). Attackers use various tactics to target users on platforms like PyPI, npm, Docker Hub, or NuGet, including promoting projects to attract victims and creating name confusion by mimicking legitimate package names. These deceptive tactics aim to trick downstream users and may involve techniques like Combosquatting, Altering Word Order, Manipulating Word Separators, Typosquatting, Built‑In Package, Brandjacking, and Similarity Attack. Furthermore, attackers may subvert legitimate packages by compromising existing, trustworthy projects, injecting malicious code, taking over legitimate accounts, or tampering with version control systems to bypass project contribution workflows (Ladisa et al., 2023). By abusing legitimate development features, malicious components can elevate privileges and move laterally through the network.
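The name-confusion tactics above, typosquatting in particular, can be screened for automatically by comparing a newly published name against popular package names. A minimal sketch using Levenshtein edit distance follows; the list of popular names and the distance threshold of 2 are illustrative assumptions, and real registries combine such checks with further heuristics.

```python
def edit_distance(a, b):
    """Levenshtein distance via a one-row dynamic-programming table."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

# Illustrative set of well-known package names to defend.
POPULAR = ["requests", "numpy", "event-stream", "lodash"]

def typosquat_suspects(name, popular=POPULAR, max_dist=2):
    """Flag a new package name that is suspiciously close to,
    but not identical to, a popular package name."""
    return [p for p in popular if p != name and edit_distance(name, p) <= max_dist]

print(typosquat_suspects("requessts"))  # → ['requests']
```

A registry-side observer could run such a check at publish time and hold near-miss names for manual review, directly countering the typosquatting vector.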
Techniques such as hiding the artifacts and disabling logging mechanisms can be used to evade defenses. Most PD frameworks also have provisions to create/modify system processes. This can be utilized to execute malicious daemons and exploit system‑level vulnerabilities. While one might argue that the mentioned attacks could also be performed independently, the key issue in PD frameworks (in their current form) is that they could be utilized as a trusted dropper by malicious players. Software companies are prime targets for APT actors, lacking a unified architecture to leverage knowledge from various sources for secure development. This lack can hinder traditional methods of studying adversary Tactics, Techniques, and Procedures (TTPs), enabling attack vectors to infect systems and industries using seemingly harmless software.

Looking at the current security landscape from the perspective of critical systems, the effects are even more pronounced. Recent developments in the Internet of Things (IoT) and Cyber‑Physical Systems (CPS) have been revolutionizing industrial control systems (ICS) such as Supervisory Control and Data Acquisition (SCADA) networks. The integration of web and mobile applications with these systems exposes downstream systems to potential catastrophic failures due to their complex workflows and interlinked nature (Abou el Kalam, 2021). Despite hardware redundancy in most industrial deployments, software failures at key controllers could still lead to a single point of collapse. For instance, the remote manipulation of Safety Instrumented Systems (SIS) could result in severe consequences for dependent industrial facilities (Iaiani et al., 2021). State‑Sponsored actors often tend to engage in warfare by compromising these systems and disrupting essential services (Izycki and Vianna, 2021). Consequently, cyber‑attacks on critical infrastructure can even cost lives.
Network-based segmentation and protection are standard practices in industrial systems. However, once an adversary infiltrates a host connected to the internal network, the entire system (even if "air-gapped") becomes vulnerable. For instance, over-the-air (OTA) updates are increasingly used to update firmware in these systems. Efforts to secure firmware updates, such as using blockchain networks, have been explored by researchers (Tsaur et al., 2022). Nevertheless, concerns persist about the security implications of computationally aided nodes (Mukherjee et al., 2021). Various controls, including system isolation, multi-factor authentication, and integrity checks, help meet security requirements. Governments mandate compliance policies, requiring training in best practices and conflict-free involvement in these systems. Despite these initiatives, inadequate scrutiny during package publication exposes a large vulnerable surface area. Users must remain vigilant regardless of project significance and seek enhanced protection against outside interference (Tomas et al., 2019). To mitigate the risks imposed by the current situation, the chapter propounds the idea of a distributed and trusted code vetting process. This work thus proposes a unified and scalable architecture that includes all stakeholders to aid users in ensuring security throughout the development process.
# 25.3 PROPOSED ARCHITECTURE
Blockchains have been regarded as a disruptive innovation that can potentially revolutionize various sectors and applications. Going by standard definitions, a blockchain is a data structure that records transactional records in a secure, transparent, and decentralized manner. It is a distributed ledger without a single controlling authority, open to anyone on the network. Once information is on a blockchain, it is nearly impossible to modify due to cryptographic schemes and digital signatures. Participants can reach consensus without a third party, enabling record verification.
These capabilities have proven useful for establishing provenance and enabling key supply chain management processes (Bandara et al., 2021). The proposed blockchain-based architecture splits the stakeholders into four discrete entities. Publishers are those who develop packages/modules and publish them on an online repository hosted on a VCS (Version Control System) like GitHub. Package Registries index these packages and make them available to the public. Entities responsible for ensuring the security and integrity of packages are termed Observers. This would include security advisories that audit the packages and the CVE watchers who keep track of reported vulnerabilities. Finally, entities who want to verify the security of the packages they consume are labeled as Users. Depending on the context, users can be the developers who download and use published packages for their projects, or the systems deployed on critical infrastructure that need to verify the update packages being delivered to them. Figure 25.2 outlines the proposed architecture and details the interactions between the entities. In certain cases, the observers need not be external to the package registries, i.e., both these services could be provided by the same vendor; they simply represent two different components.
FIGURE 25.2 Interaction between entities in proposed architecture.
Once a package has been developed and is ready for publishing, common tasks such as running tests and updating tags and version numbers according to SemVer (Semantic Versioning) are done before pushing it to the Package Registry (Figure 25.2, Step 1). Up to this step, none of the traditional methodologies needs to be modified. Once the package has been published, a copy of the package information is forwarded to all the observers in the observer pool (Figure 25.2, Step 2).
They would then check whether the details are authentic and whether any known vulnerabilities exist. Common methods include verification of checksums and validation against VirusTotal. If the package is found to be harmless by an observer, the verification result is translated into a local block and prepared to be added to the blockchain network (Figure 25.2, Step 3). The digital asset can simply be represented as a collection of key-value pairs in binary or JSON formats. Some of the metadata that could be used to denote vulnerabilities includes a Common Vulnerability Scoring System (CVSS) score, threat classification, affected systems, etc. Each observer accumulates their commits locally until they decide to create a block. The creation of a block requires an observer to digitally sign the proposed block using a multi-party digital signature algorithm. In addition to their private key, this scheme requires a consortium of users to sign a single blob, addressing the concerns of both group and ring signatures. Each observer needs validation of their work from at least one co-observer, selected at random by the Package Registry (Figure 25.2, Step 4); the co-observer then returns the verified and signed block to the Package Registry (Figure 25.2, Step 5). Finally, the Package Registry adds this accepted block to the blockchain (Figure 25.2, Step 6). Note that adding blocks can only be performed by the Package Registry. As with the genesis block in most DLTs (Distributed Ledger Technologies), the first block can be hard-coded in this case as well. Since the observers can be seen as competing entities, the constant challenging of scanning reports by co-observers results in more accurate and widely accepted blocks. The block interval is also designed to be configurable to provide granular control over the system's functioning.
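The block contents and the key-value vulnerability metadata described above can be sketched as follows. This is a minimal illustration, not a normative schema: the field names, the example package, and the use of SHA-256 over canonical JSON are all assumptions.

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class Block:
    previous_hash: str          # root hash of the preceding block
    records: list               # observer verification records (key-value pairs)
    signatures: dict = field(default_factory=dict)  # observer_id -> signature over root_hash

    @property
    def root_hash(self) -> str:
        # Hash of the canonical JSON encoding of the block contents.
        payload = json.dumps({"prev": self.previous_hash, "records": self.records},
                             sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# An illustrative verification record; field names are assumptions, not a fixed schema.
record = {
    "package": "example-pkg",                       # hypothetical package
    "version": "1.3.0",
    "status": "vulnerable",
    "cvss_score": 7.5,                              # CVSS base score reported by the observer
    "threat_classification": "remote code execution",
    "affected_systems": ["linux", "windows"],
}

genesis = Block(previous_hash="0" * 64, records=[record])
nxt = Block(previous_hash=genesis.root_hash, records=[])  # links to the genesis block
assert nxt.previous_hash == genesis.root_hash
```

Because `root_hash` covers every record in the block, any later tampering with a record changes the hash and invalidates the observers' signatures over it.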
Once a block is added to the chain, the observers are notified by the Package Registry to update their local copies of the blockchain with this new block based on the publicly accessible blockchain (Figure 25.2, Step 7). This process of block confirmation serves as an acknowledgment to the nodes that a proposed transaction was successfully included in the chain. When multiple observers try to propose a block simultaneously, causing a race condition, the Package Registry is responsible for resolving it. Blocks are added sequentially, and the observer whose block was not added is notified to propose a new one. The resultant signature is part of the currently accepted record, and the root's final hash is inclusive of the multi-party signature. The "previous hash" field of the next block points to this newly computed hash and thereby establishes a link. This results in an immutable ledger that can securely record the verification process with federated trust management. The entire flow of data is illustrated as a sequence diagram in Figure 25.3. Now, when a user needs to download a package and include it as part of their project, the details and security of the package can be verified against the information in the blockchain network (Figure 25.2, Steps 8 and 9). The security of this architecture is enforced because every root hash is digitally signed by multiple observers. A user who wants to verify a package would use the public key of the corresponding observer to read a block. Consequently, the observers' identities are at stake, which helps ensure that the block(chain) is free of malicious entries. This serves as a Proof of Authority (PoA) consensus algorithm that leverages the value of identities and reputations (Honnavalli et al., 2020). Algorithm 25.1 outlines the verification procedure that a user would follow when checking whether a dependency is safe to install.
# Algorithm 25.1: Verification of a Package Status
INPUT: Identifier for a Package that needs to be verified
OUTPUT: Returns 'true' if the package is safe; otherwise, the list of vulnerabilities is returned.

chainValidity():
    for block in chain do:
        Check if previousHash equals the currentHash of the previous block;
        if chain is broken then:
            return false;
        end
    end
    return true;

if chainValidity() == true then:
    Find the latest block containing package information;
    Verify the signature on the Root Hash;
    Retrieve the latest record corresponding to the concerned package and version;
    if package is trusted by observers then:
        Initiate periodic verification of package status;
        return true;
    end
end
return List of all Vulnerabilities;

FIGURE 25.3 Sequence diagram of the proposed architecture.

An observer would also have a numeric "rank" tagged to them. This rank determines an observer's reputation. Each time a block is verified by a co-observer, the rank is incremented. Similarly, when an observer produces an increased number of false positives or false negatives while classifying threats, their rank is downgraded. Combined with the PoA, the rank can be used to reward and penalize observers according to their participation in the network. When multiple observers have differing opinions on the security of a package, an observer might decline to sign the proposed block. In this case, the Package Registry requests yet another observer to validate the block. Thus, by default, a minimum of two entities must hold the same opinion. However, there could be a case where only one observer was sophisticated enough to detect a threat in a package. In this case, the observer's rank can be used to determine whether the block can be accepted. This methodology balances the occurrence of false positives with diversity in reporting, as each observer might report a distinct vulnerability that was missed by another scanner.
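Algorithm 25.1 can be rendered as runnable Python. This is a sketch under simplifying assumptions: blocks are plain dictionaries, the hash is SHA-256 over canonical JSON, and `signature_ok` is a caller-supplied stand-in for verifying the multi-party signature on the root hash.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic hash over the block body (the "current hash" of Algorithm 25.1).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def chain_validity(chain: list) -> bool:
    # chainValidity(): every block must point at the hash of its predecessor.
    for prev, block in zip(chain, chain[1:]):
        if block["previous_hash"] != block_hash(prev):
            return False
    return True

def verify_package(chain: list, package: str, version: str,
                   signature_ok=lambda blk: True):
    """Return True if the package is trusted by the observers; otherwise return
    the list of recorded vulnerabilities. `signature_ok` is a stand-in for
    verifying the multi-party signature on the root hash."""
    if not chain_validity(chain):
        return ["chain integrity violated"]
    for block in reversed(chain):      # the latest record wins
        if not signature_ok(block):
            continue
        for rec in block["records"]:
            if rec["package"] == package and rec["version"] == version:
                if rec["status"] == "safe":
                    return True
                return rec.get("vulnerabilities", ["unspecified vulnerability"])
    return ["no verification record found"]

# Hypothetical two-block chain for a package "p".
b0 = {"previous_hash": "0" * 64,
      "records": [{"package": "p", "version": "1.0", "status": "safe"}]}
b1 = {"previous_hash": block_hash(b0),
      "records": [{"package": "p", "version": "1.1", "status": "vulnerable",
                   "vulnerabilities": ["example-vuln"]}]}
chain = [b0, b1]
assert verify_package(chain, "p", "1.0") is True
assert verify_package(chain, "p", "1.1") == ["example-vuln"]
```

Note that scanning from the newest block backwards implements the "retrieve the latest record" step: a later block marking a version as vulnerable supersedes an earlier "safe" verdict without rewriting history.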
When users are unsure whether a package can be trusted, voting ensembles can help them make informed decisions based on these insights. The blockchain system discussed in this solution is comparable to a permissioned ledger that is open for public view and replication. Only verified observers are allowed to add blocks to the chain via the Package Registry. All other entities have read-only access to this source of truth. Therefore, identity and access management controls can be effectively implemented. This brings an array of advantages such as better scalability and faster transactions compared to public blockchains (Ambili et al., 2017). The limited number of pre-approved block validators enables an efficient platform capable of achieving higher transactions per second (TPS). Such systems combine the concept of "permissioning" from private associations while embracing certain principles of decentralized governance. Since there is no mining involved, recording validations is efficient and essentially free. Such a model presents the best of both worlds and optimally addresses security concerns while balancing availability. DLTs like Hyperledger Fabric and R3 Corda can be used to construct such networks (Sajana et al., 2018). The primary reason for choosing a blockchain over a conventional database is the requirement for an append-only ledger that can be read by anyone. Traditional databases do not block updates or deletes by design, which is undesirable in this use case. When analyzing the architecture in the context of securing critical systems, speed and efficiency are key aspects that must be ensured. Contrary to how most permissioned voting consensus systems operate, public blockchains often resort to a technique called sharding to increase transactional throughput. Fundamentally, this involves horizontally spreading out storage and computational workloads to speed up processing.
In such a scenario, it would suffice for a node to maintain only the data related to its partition, or shard. In the case of the architecture described in this section, explicit engineering efforts to scale up the network are not required. Most Hyperledger implementations employing Byzantine fault-tolerant (BFT) protocols have inherent abilities to perform at scale (Sousa et al., 2018). In a practical setting, there might be instances where the blockchain has to "fork." Forks might occur due to diverging copies of the chain being maintained separately, or simply because of a software update to the system. For all the observer entities participating as full nodes, the same version of the processing logic must be in sync. To ensure backward compatibility with outdated nodes, the system would use soft forks to arrive at a unanimously agreed consensus algorithm. In most public blockchain networks, a contentious hard fork is enforced when a significant fraction of full nodes disagrees about the software versions. However, since this proposed system is designed along the lines of a permissioned ledger, this can be avoided. Many critical systems tend to prioritize stability over feature enhancements. Hence, software engineers writing code for such systems tend to lock their dependency versions. Because the blockchain functions as an append-only ledger, information for older versions is always retained. Even if an update must be made for a block that has already been added to the chain, it can only be appended as a new block; the old block remains in place. This way, the system can also serve as an audit trail that documents all changes made across versions. With the various phases in which supply chains and PD networks are involved, this architecture can be used in multiple stages of the product life cycle. Being focused on interoperability, the proposed architecture builds on top of the existing stack.
For effective implementation of the solution, the system does not require the existing framework to be replaced entirely. All the governing rules can be programmed as smart contracts on the DLT platform of choice. These would comprise the code that contains the set of rules enforced by the system. The blockchain-based ledger can be implemented in addition to the existing system and populated asynchronously. Thus, the migration can happen gracefully and will not result in service downtime. With full API and webhook support, users can extend their existing workflows to work with the proposed framework. Since the entire process is handled asynchronously, there will be no reduction in the read or write throughput of the package managers. By having periodic checks performed on the source code as part of the CI/CD (Continuous Integration and Continuous Delivery) pipeline, organizations can verify the integrity of their development life cycle at scale. From an organizational standpoint, using this solution leads to an agile DevSecOps cycle by introducing appropriate checks at critical stages of the software development process. Similarly, once an application is deployed, any further updates to the system can be considered a package published over an update server. In this case, the update server is analogous to the Package Registry, and all transactions can be mapped correspondingly. This way, the proposed solution can be integrated with critical systems and secure every interaction that involves pulling/pushing software.
# 25.4 ANALYSIS AND OBSERVATIONS
# 25.4.1 Security Assumptions
The proposed architecture is based on the assumption that the verdict given by the observers will be accurate to the best of their knowledge. The system assumes that the Package Registry is trusted and will not act against its functioning. Furthermore, this system does not outline a compensatory model for recognizing the commercial value of the observers.
Just as vendors today often provide basic security and scanning services at no cost as part of their business models, this architecture establishes a similar environment for them to offer their services. Standard security protocols need to be in place across all layers of the network stack. All communications among entities need to happen over secure channels using protocols like TLS and IPsec. The certificate revocation list (CRL) has to be checked to ensure the validity of certificate authorities (CAs) and the X.509 digital certificates issued by them. This is critical to prevent man-in-the-middle (MITM) attacks and session hijacking. Attacks such as DNS (Domain Name System) cache poisoning can also be prevented by enforcing signature validation. Access control configurations need to adhere to the principle of least privilege (POLP). All server-level vulnerabilities need to be patched and updated to prevent possibilities of security compromise and breaches. The "Blockchain Security Framework" from the OWASP Foundation could serve as a general guideline for hardening various stages of development and establishing a security baseline.
# 25.4.2 Protection Against Malicious Entities
The process of threat modeling aids effective risk management, which is critical for compliance with certain regulations and certification bodies. Here, the chapter details the attack scenarios discussed earlier and presents the potential mitigation provided by the proposed architecture. The MITRE ATT&CK knowledge base has been used as a foundation for developing threat models specific to this use case. Scenario 1: Consider the scenario where a package has been published to a Package Registry along with an obfuscated malicious payload. Such malicious commits often go unnoticed during reviews of pull requests to open-source repositories.
As per the proposed architecture, once the package has been published, observers receive a trigger to evaluate the security concerns of this newly published package. Publicly known threats can be easily detected in coordination with services like VirusTotal and by watching CVE listings. Regardless of whether the presence of a threat is confirmed, the scan results are recorded in the block(chain). Both the observers and these services can utilize the determined result to further enhance the datasets on which their anti-malware engines are trained. The attestation of the observer is reinforced by the digital signature and the rank included as part of the block's contents. Now, the observer can initiate a take-down request with the Package Registry. If any user had downloaded the malicious package, the user could securely verify the status of the package against the read-only copy of the blockchain ledger using the digital signature of the observer(s). The same verification process applies to any other package that has this malicious package as one of its dependencies. If an attacker chooses to modify the status of a package stored on the ledger, they will have to recreate the Merkle tree of the block. However, in this case, the attacker will be unable to create a valid digital signature for the block, since they do not have access to the private key of any valid observer. Assuming a forged signature is created and placed in the ledger (i.e., the Package Registry is compromised), the forgery would be detected by the observers, as their local blockchains would alert the stakeholders. Even if the alert is ignored, the user will still be able to detect that the data has been tampered with by verifying the identity of the entity that signed the block. The immutability of the ledger is thus enforced in the proposed architecture.
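The point about recreating the Merkle tree can be made concrete with a few lines of code: changing any record changes the root, so a signature computed over the original root no longer matches. This is a minimal sketch (SHA-256, last-node duplication on odd levels); the record contents are illustrative.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    # Pairwise-hash leaves upwards; duplicate the last node on odd-sized levels.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

records = [b"pkg-a 1.0 safe", b"pkg-b 2.1 vulnerable"]
root = merkle_root(records)   # this is what the observers sign

# Tampering with any record yields a different root, so the observers'
# signature over the original root no longer verifies.
tampered = [b"pkg-a 1.0 safe", b"pkg-b 2.1 safe"]
assert merkle_root(tampered) != root
```

An attacker would thus need an observer's private key to re-sign the forged root, which is exactly the barrier the multi-party signature scheme raises.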
Scenario 2: Zero-day vulnerabilities can be discovered in packages that are already powering production systems. Initially, the threat could have gone unnoticed while the observers scanned the package. The requirement is for systems to become aware that they have been using a compromised package. Two features in the proposed system accommodate this requirement. First, since the ledger can have multiple blocks added to the chain corresponding to a specific package and its version, the user reads the latest metadata to have up-to-date information on a package. Second, the automated periodic verification routine on the user's end lets the system know if any of the installed packages have been compromised. If so, the concerned stakeholders can be alerted to take the necessary action. Scenario 3: In more severe attacks, an observer itself could be compromised and act maliciously despite its identity being at stake. This could result in the final verdict being inverted, intentionally increasing the number of false positives and false negatives. In such a case, the multi-party signature enforced by the architecture ensures that a single malicious observer cannot affect the system. Since each observer needs at least one other randomly chosen entity (co-observer) to acknowledge its scan results, or a high rank based on past reputation, it becomes hard for a malicious entity to masquerade as an observer. For the entire system to be compromised, multiple observer entities would have to be controlled to successfully execute the attack. Before such a situation occurs, this behavior can be traced by the participating entities, and the offenders' access to the permissioned blockchain can be revoked. To further harden the system, the minimum number of required co-observers can be increased at the discretion of the stakeholders. Nevertheless, this can serve as a self-regulating framework whose functioning is dictated by its stakeholders.
# 25.4.3 Advantages of the Proposed Architecture
Compared to most PD frameworks available today, the proposed architecture combines the advantages of these frameworks while ensuring that security concerns are effectively addressed. Essential features such as vulnerability reporting and integrity verification have been hardened by utilizing a blockchain system. The key difference is in the philosophy of enforcing security and trust. While most systems like npm and PyPI offer a wide distribution of trust, the proposed architecture uses a narrow distribution of trust and encourages multi-party consensus between entities that might be mutually suspicious. Based on the business requirements of organizations, the proposed solution can accommodate customizable security policies and access controls on top of the core architecture. Furthermore, when inspecting this architecture with regard to critical systems, the proposed solution can be loosely integrated with legacy systems and provides graceful degradation of services in case of failures on blockchain nodes. The "zero-trust" approach ensures that every software artifact used can be verified independently. The distributed system also means that users can offload the computational processing required at endpoints. On the technology side, the proposed system is fully compatible with proprietary protocols and data formats, eliminating concerns about vendor lock-in. Finally, implementing and enforcing a security measure involving multiple entities inherently requires some commonly shared responsibilities. To incentivize the adoption of this architecture, participating entities can leverage the advantages of sharing threat intelligence (Samtani et al., 2020). All interactions happening on this system can be logged in Security Information and Event Management (SIEM) and Security Orchestration, Automation and Response (SOAR) solutions for proactive monitoring and alerting.
In certain cases, the collective information and statistical analysis derived from these sources can help organizations in patch management and prioritization strategies. This repository of information about the security of software packages can also serve as a source to aid Open‑Source Intelligence (OSINT) and Operations Security (OPSEC).
Software systems have grown into an indispensable commodity used across various industries, and almost all essential services depend on them for effective operation. Software is no longer an independent or stand-alone piece of code written by a single developer but rather a collection of packages designed by multiple developers across the globe. Ensuring the reliability and resilience of these systems is crucial, since emerging threats target software supply chains, as demonstrated by the widespread SolarWinds hack in late 2020. These supply chains extend beyond patches and updates, involving distribution networks throughout the software lifecycle. Industries like smart grids, manufacturing, healthcare, and finance rely on interconnected software systems and their dependencies for effective functioning. To secure software modules and add-ons, robust distribution architectures are essential. This chapter enhances existing delivery frameworks by including a permissioned ledger with Proof of Authority consensus and multi-party signatures. The proposed system aims to prevent attacks while permitting every stakeholder to verify package integrity. Critical systems can interface with the secure pipeline without disrupting existing functionalities, thus preventing the cascading effect of an attack at any point in the supply chain.
# I. INTRODUCTION
LLMs have become a key tool for automated software engineering. They are being used in a variety of contexts, with increasing autonomy [1]. Importantly, they are commonly used for code generation tasks, and their outputs are being integrated into software [2]. These decisions are made with expectations of performance, informed by evaluation. It is more important than ever to be able to fairly evaluate LLMs' use for code generation and problem-solving, and to better understand the risks of the unintended behaviours they may exhibit, especially in the presence of dataset contamination. Numerous studies have evaluated the performance of LLMs on standard datasets - there are even leaderboards ranking the performance of various models on them [3], [4]. LLMs have also been compared against humans, with studies suggesting superior model performance [5]. However, the validity of these rankings is doubtful because the behaviour of the models is not guaranteed to be genuine. Researchers have raised concerns about the possibility of results reflecting dataset contamination and models' overfitting on training data [3]. These concerns challenge the validity and rigour of current evaluation approaches and their resulting conclusions. These limitations can, in turn, cause overestimation of LLMs' capabilities, unsafe deployment in production environments, and an altered public perception of the technology that can threaten its general adoption.
Fig. 1: Example of an obfuscated task. (The figure shows a coding prompt, "Write Python code to solve the following problem: ...", whose task text has been obfuscated beyond human recognition.)
This paper contributes to this discussion by studying LLM behaviour when solving problems under obfuscated inputs. We perform an exhaustive evaluation, adding varying levels of noise to common benchmark tasks, up to a point where they are no longer recognisable. These obfuscated tasks were supplied to LLMs, and their outputs were evaluated. Figure 1 shows an example of an obfuscated task - the text is unintelligible, and humans would not be able to understand the question or write a solution. However, we found that when this task was given to several LLMs, they were consistently able to solve it, "correctly" identifying it as the problem of finding the median value in two sorted arrays, found in the LeetCode benchmark dataset. We show that this behaviour is more evident in tasks published before the models' knowledge cutoff date, suggesting strong memorisation or overfitting to training data rather than legitimate reasoning about the presented problem. We show that this limitation can be exploited, highlighting implications for safety. Based on the empirical results, we discuss the implications for benchmarking and evaluations of model behaviour, arguing for caution when designing experiments using standard datasets. We also propose measuring the decay of performance under obfuscation as a possible strategy for detecting dataset contamination. We empirically show that performance decay under extreme obfuscation is a practical indicator of dataset contamination and overfitting.
• We provide a quantitative analysis of performance decay across two standard evaluation datasets (LeetCode, MATH), revealing stark differences between new and contaminated tasks.
• We introduce the concept of eager pattern matching to describe the behaviour where LLMs solve tasks obfuscated beyond human recognition by exploiting spurious patterns, leading to incorrect solutions on new problems.
• We propose a reproducible framework for generating obfuscated tasks and testing dataset contamination.
• We discuss the broader implications for LLM evaluation practices, deployment in automated software systems, and risks arising from eager pattern matching and contamination.
Section II introduces the background of this study and Section III its methodology. Section IV presents the results, and Sections V and VI discuss our findings and their implications.
# II. BACKGROUND AND RELATED WORK
# A. Dataset contamination detection
There exist benchmark datasets which have been extensively used in evaluating LLMs. They have become key resources, resulting in the construction of performance leaderboards [4] and gathering entire communities of researchers [6]. Important examples include the LeetCode [7] and MATH [8] datasets. However, due to the relative importance of these datasets, they have also been extensively used in model training, which unfortunately corrupts measurements of performance and other original aims of the datasets [9]. Significant efforts are being undertaken to combat these issues, most notably through temporal dataset splits [3] and dynamic dataset creation [10]. Methods of detecting LLM contamination have been proposed, such as token probability, completion overlap, and performance [11], [12]. Some of these methods have been shown to achieve high accuracies, but due to a lack of ground-truth information and probabilistic LLM outputs, no method is perfectly reliable, and researchers are constantly looking for new ways of testing contamination [12].
We contribute to this area of research by considering LLM behaviour and performance on obfuscated tasks as a method for detecting dataset contamination.
# B. LLM resilience to noise
There have been multiple previous studies investigating the performance of LLMs on noisy or obfuscated tasks, primarily focusing on their resilience to input noise. Researchers see this as a topic of particular interest in the context of LLMs' interactions with humans, where noise is likely to be introduced and might cause negative effects. Wang et al. [13] investigate the resilience of LLMs to five types of input noise on tasks from the MMLU dataset [14]. Khandalkar et al. [15] tested the performance of LLMs when noise is added to the ARC dataset [16]. Both studies report performance degradation across all models considered when noise is introduced, and aim to improve robustness. Crucially, the rate of added noise was limited, with both papers only introducing noise levels equivalent to about 10-35% of the range we examine. In contrast with prior studies, which have focused on improving LLM resilience to noise, we explicitly use performance decay under extreme obfuscation as a symptom of dataset contamination and overfitting. This highlights a gap in the literature surrounding evaluations of LLMs under extreme conditions and their consequences. To our knowledge, the angle of using performance decay analysis as a contamination detection strategy has not been explored in previous work.
# III. METHODOLOGY
We conducted an experiment to examine the overperformance of LLMs on obfuscated tasks, systematically attempting to reproduce this phenomenon across a range of tasks and large language models, recording outcomes and intermediate stages.
# A. Datasets
The dataset designated by us as LeetCode (Old) or OldLC was downloaded from HuggingFace [7], [17]. We selected the first 20 questions from the dataset to be included in our experiment.
The questions were originally sourced from the LeetCode platform [18]. The LeetCode (New) or NewLC dataset was compiled by us using the 20 most recent LeetCode questions at the time [18]. All questions were published in March 2025. The dataset comprises publicly available tasks and metadata accessed from the LeetCode Problemset [18]. We also included a dataset of non-coding tasks. The MATH dataset, released in 2021, is one of the best-known mathematical problem-solving benchmarks [8] and is used very frequently in LLM evaluations [4]. The dataset was downloaded from HuggingFace [17], [19]. We selected the first 20 questions from the dataset to be included in our experiment.

# B. Obfuscation methods

We considered multiple methods for obfuscating tasks through text augmentation. We identified the open-source nlpaug library as the best starting point, given its multiple implemented methods for textual augmentation designed for use in machine learning experiments. We selected Typos, which simulates human typing errors, and Deletions, which randomly removes a proportion of words. As this functionality was not available in the nlpaug library, we implemented Truncation as a simple function removing text beyond a given point. We use each obfuscation method to create 10 new versions of each task, obfuscating the text with increasing rates of augmentation. Figure 2 shows selected examples of obfuscated tasks:

- "Given a signed 32-bit integ" (a coding task obfuscated with Truncation, 0.9 aug. rate)
- "Are given height. vertical lines (,)(,[]). Find -, contains. can. that not container." (a coding task obfuscated with Deletion, 0.7 aug. rate)
- A LaTeX-formatted math task garbled with Typos (0.5 aug. rate), no longer recognisable
- A math task obfuscated with Deletion (0.5 aug. rate), reduced to scattered symbols
- A math task obfuscated with Truncation (0.8 aug. rate)

# C. Large Language Models

The models were chosen to come from a diverse range of developers, sizes, and inference costs. To ensure our evaluations of the NewLC dataset are not affected by contamination, we only considered models with knowledge cutoffs and release dates before March 2025. The set chosen represents the relative leaders of the industry [20], [21]:

• Claude 3.5 Haiku (Anthropic, April 2024 [22])
• DeepSeek V3 (DeepSeek, December 2024 [23])
• Gemini 2.0 Flash (Google, February 2025 [24])
• Llama 3.3 (Meta, December 2024 [25])
• GPT-4o-mini (OpenAI, July 2024 [26])

The experiment used the OpenRouter platform [21] to send API requests to the LLM providers. The LLMs were prompted with the obfuscated task, along with brief instructions asking them to provide either Python code that solves the question or mathematical reasoning with an indicated final answer.
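The obfuscation methods of Section III-B can be approximated with short standalone functions. This is only an illustrative sketch: the paper uses the nlpaug library for Typos and Deletions, so the simplified implementations below (random letter substitution for typos, random word removal, and a length cut-off) are stand-ins rather than the exact augmenters used in the experiment.

```python
import random

def typos(text: str, rate: float, seed: int = 0) -> str:
    """Crude stand-in for nlpaug's typo augmenter: replace a fraction
    of alphabetic characters with random lowercase letters."""
    rng = random.Random(seed)
    chars = list(text)
    for i, c in enumerate(chars):
        if c.isalpha() and rng.random() < rate:
            chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

def deletion(text: str, rate: float, seed: int = 0) -> str:
    """Randomly remove a proportion of words, as in the Deletions method."""
    rng = random.Random(seed)
    kept = [w for w in text.split() if rng.random() >= rate]
    return " ".join(kept)

def truncation(text: str, rate: float) -> str:
    """Remove all text beyond a given point (the paper's Truncation method)."""
    cut = int(len(text) * (1 - rate))
    return text[:cut]

task = "Given a signed 32-bit integer x, return x with its digits reversed."
print(truncation(task, 0.9))  # keeps only the first 10% of characters
```

Applying each function at ten increasing rates to each task reproduces the 10-versions-per-method structure described above.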
In cases where the response was not in the requested format, the models were re-prompted up to 3 times. If an LLM did not produce a parsable response after that, it was marked as unable to answer that question correctly.

# D. Metric

For coding tasks, input-output pairs present after the task were designated as test cases, usually 2-3 per question. To be considered correct, the LLM solution had to pass all test cases in automated evaluation. As LLM-generated code is generally considered unsafe [27], we evaluated it in a sandboxed environment, blocking a predefined set of potentially malicious keywords and imports [28]. We did not record any instances where code was blocked from executing. For mathematical tasks, only the final answer was compared. To be considered correct, the LLM answer string had to match the original solution exactly, with minor formatting differences allowed.

# E. Implementation

The software needed to implement and carry out the experiments is published in a GitHub repository associated with this project: https://github.com/radzim/obfuscated. An overview of the experiment pipeline is presented in Figure 3.

Fig. 2: Examples of obfuscated tasks.

Fig. 3: Simplified experiment pipeline.

# F. Human baselines

In order to establish a baseline for the expected behaviour, we asked 4 researchers with a high level of coding and math skills to report what percentage of tasks they believed they would be able to understand under different obfuscation methods and levels, by showing them randomly selected examples. This is not the same metric as used in the LLM evaluation, as repeating the full experiment with humans is impractical due to time constraints and memorisation. While not directly comparable to measured performance, it provides a useful comparison baseline.

# G. Adversarial tasks

In order to further test our hypotheses, we manually crafted tasks that closely resemble well-known questions from the LeetCode dataset but in fact ask very different questions. Considering the previously mentioned "median of two arrays" problem, we present an adversarial example in Figure 4:

Write Python code to solve the following problem: Given two arrays nums1 and nums2 of size m and n respectively, return the medians of the two arrays. The overall run time complexity should be. Constraints: 0 <= m <= 1000; 0 <= n <= 1000; 1 <= m + n <= 2000; -10^6 <= nums1[i], nums2[i] <= 10^6

This problem is functionally different from the original in two key ways: it makes no mention of the inputs being sorted, and it asks for two separate medians of the two arrays, not one common median. We asked the LLMs to solve it, observing whether they are more likely to reason about it correctly or to eagerly pattern-match to a similar-looking known task.

Fig. 4: Adversarial task resembling the structure of the "median of two sorted arrays" task.

Fig. 5: The performance of LLMs on the two LeetCode datasets, averaged across the three obfuscation methods. Performance across the augmentation methods is presented in Appendix A.

We successfully conducted the experiment, creating 600 new tasks through augmentation to be used alongside the 60 original ones, eliciting 3300 LLM responses, and generating 3300 task evaluation scores. These intermediate artefacts are made available in the project's GitHub repository: https://github.com/radzim/obfuscated.

# IV. RESULTS

We report aggregate performance metrics for each combination of dataset, model, and augmentation rate. We report average scores achieved over the 20 tasks in each dataset and the 3 obfuscation methods considered.
More detailed results, with scores separated by the 3 obfuscation methods, are presented in Appendices A and B. In our human baseline, coding tasks obfuscated at augmentation rates above 0.5 were judged impossible to solve by all participants. We believe many of the solutions presented by LLMs are not legitimate, but instead rely on recognising patterns and responding with solutions to previously seen problems. Some striking examples of solutions to seemingly unsolvable tasks are highlighted in the next section.

# A. LeetCode datasets

The overall result we were expecting in this part of our setup was accuracy decaying with augmentation, eventually reducing to zero when key details become obfuscated. This is consistent with the behaviour we see on the NewLC dataset, with a 49% performance decay at 0.3 augmentation and a 100% decay at 0.8 and above. In contrast, performance on the OldLC dataset did not suffer with increasing augmentation as much as expected: performance on 0.3-augmented tasks was only 5% lower than on the originals, and even some 1.0-augmented tasks were still being correctly solved. The performance on both LeetCode datasets is illustrated in Figure 5. The accuracy on the NewLC dataset is much lower across all augmentation rates; while the LLMs are not able to overcome the most extreme levels of obfuscation, they show very good error-correcting capabilities. We considered whether the OldLC solutions could be due to legitimate error-correcting capabilities of LLMs. We evaluated this using adversarial examples and showed that gravitation towards previously seen tasks dominates error correction. Details of this are included in Appendix D. We compare the relative performance decay of LLMs across the two LeetCode datasets. These contain very similar tasks, coming from the same source, but differ substantially in time of release (2015 vs 2025).
The OldLC dataset is almost certainly in the training set of all of the LLMs examined, and it has been a key benchmark for years [3]. The NewLC dataset contains questions released after the release dates of all of the LLMs, which makes it unlikely that any of the questions were included in the training set. Based on the comparison of the OldLC and NewLC datasets shown in Figure 6, we hypothesise that a slower rate of decay can be used as a sign of overtraining or inclusion in training data. The comparison between the decay on OldLC and NewLC highlights the stark difference in model behaviour under increasing augmentation. Despite 900 attempts, no LLM solved a single task obfuscated above 0.7 in the NewLC dataset, while this was achieved on the OldLC dataset in 13.6% of cases (122 times).

Fig. 7: The average performance of selected LLMs on the 20 tasks in the MATH dataset, averaged across the three obfuscation methods.

# B. MATH dataset

The MATH dataset pre-dates the release of the LLMs and their knowledge cutoffs. Therefore, we are unable to compare relative performance losses between questions included in the training sets and new ones. We evaluate the performance of LLMs on obfuscated tasks from the MATH dataset to examine whether the behaviour we identified above in the OldLC dataset is present in other types of tasks. According to the established human baselines, the highest augmentation rate at which any of the researchers believed they could understand some questions was 0.4 for this dataset. Our evaluation, presented in Figure 7, shows that a substantial number of questions are solved far beyond that point, with many correct solutions even at the highest levels of obfuscation.

Fig. 6: Comparison of the average performance decays of LLMs on the two LeetCode datasets.

Fig. 8: Comparison of the average performance decays of LLMs (evaluated) and humans (self-reported) on the MATH dataset.
Performance under specific obfuscation methods is detailed in Appendix B. In Figure 8, showing performance decay, we do see a steep initial drop, which we did not see in the case of the OldLC dataset. This could suggest a lower degree of overtraining, or perhaps that the dataset was not included in training at all. However, the good performance on highly obfuscated tasks rules out that possibility. We hypothesise that the length of questions has a key impact here, as MATH tasks were on average 4-6 times shorter than LeetCode tasks. Longer strings of information are known to be significantly more resilient to noise through language redundancy [29]. The obfuscated examples from Figure 2, annotated with the models that solved them (collected in Figure 9), are:

- "Given a signed 32-bit integ" ("Reverse Integer", Truncation) - solved by all 5 LLMs evaluated
- "Are given height. vertical lines (,)(,[]). Find -, contains. can. that not container." ("Container With Most Water", Deletion) - solved by Claude 3.5, DeepSeek V3, and Gemini 2.0
- The Typos-garbled LaTeX math task (Answer: 49) - solved by DeepSeek V3 and Gemini 2.0
- The Deletion-obfuscated math task (Answer: 1) - solved by Gemini 2.0 and Llama 3.3
- The Truncation-obfuscated math task (Answer: even) - solved by Gemini 2.0

# C. Statistical testing

Table I presents statistical properties of the performance decay observed in our experiments.
We measured the average augmentation level required to cause a 50% and a 100% drop in model performance, a linear-model fit for the gradient of the decay function, and the average performance decay across all augmentation levels. Confidence interval widths were calculated using parametric resampling, modelling the results for each augmentation level for each model as coming from a binomial distribution.

TABLE I: Comparison of several decay metrics on the three datasets.

We highlight the 50% decay statistic as most useful at this stage, due to its simple definition and calculation, large contrasts, and relatively low variance. Examining the confidence intervals for the two LeetCode datasets, we find that the differences in LLM behaviour between these two similar datasets are significant, with OldLC decaying more slowly in all four metrics considered.

# D. Illustrative Examples

As shown in Figure 9, each of the examples originally presented in Subsection III-B was solved by at least one of the LLMs, despite not containing sufficient information to be solved using legitimate reasoning. The LLMs were also given the adversarial example we constructed in Subsection III-G, asking them to find the two median values of two unsorted arrays. Surprisingly, all 5 LLMs queried replied with code for finding a single median of two sorted arrays - a clearly incorrect answer to the simple question, influenced by the stylistically similar task in their training data. This example illustrates how LLMs can be influenced by irrelevant patterns, causing them to respond to relatively simple tasks in unpredictable ways.

# E. Performance of different LLMs

We compared the initial performance and rate of decay across the 5 LLMs used in evaluation. This analysis is presented in Appendix C.
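The parametric resampling behind the confidence intervals of Section IV-C can be sketched as follows. The accuracy curve, the exact definition of the 50%-decay statistic, and the bootstrap size below are illustrative assumptions; only the overall recipe (treat each augmentation level's score over the 20 tasks as a binomial draw, resample, recompute the statistic) follows the paper.

```python
import random

def decay_50(acc_by_level, levels):
    """First augmentation level at which accuracy falls to <= 50% of the
    unobfuscated (level-0) accuracy. Illustrative definition."""
    base = acc_by_level[0]
    for lvl, acc in zip(levels, acc_by_level):
        if acc <= 0.5 * base:
            return lvl
    return levels[-1]

def bootstrap_ci(acc_by_level, levels, n_tasks=20, n_boot=2000, seed=0):
    """Parametric bootstrap: model each level's score as Binomial(n_tasks, p),
    resample, and take the 2.5/97.5 percentiles of the recomputed statistic."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_boot):
        resampled = [sum(rng.random() < p for _ in range(n_tasks)) / n_tasks
                     for p in acc_by_level]
        samples.append(decay_50(resampled, levels))
    samples.sort()
    return samples[int(0.025 * n_boot)], samples[int(0.975 * n_boot)]

levels = [i / 10 for i in range(11)]                               # rates 0.0 .. 1.0
acc = [0.9, 0.85, 0.8, 0.7, 0.55, 0.4, 0.3, 0.2, 0.1, 0.05, 0.0]  # invented curve
lo, hi = bootstrap_ci(acc, levels)
print(f"50% decay level, 95% CI: [{lo:.1f}, {hi:.1f}]")
```

The same resampling loop applies to the other three metrics (100% decay, fitted gradient, average decay) by swapping out the statistic function.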
While we observed some pairwise statistically significant differences, we found that all 5 LLMs exhibit the same characteristic behaviour and trends across all datasets and obfuscation methods considered.

# F. Characteristics of datasets

The impact of obfuscation on LLM performance differed significantly between the 3 datasets, with all 5 LLMs showing the quickest initial decline on MATH, followed by NewLC, and lastly OldLC. It is difficult to compare effects between datasets, as beyond the issues of overtraining and contamination, the decay and overall behaviour could be affected by question style, length, and information redundancy. However, as the two LeetCode datasets are very similar in all of these aspects, we can make a meaningful comparison between them.

# V. DISCUSSION

LLMs can solve tasks that are obfuscated beyond recognition and missing key details, indicating that they do so through memorisation and eager pattern matching rather than genuine reasoning. Benchmarking models on contaminated datasets is therefore unreliable and likely overestimates real performance. We have shown that this behaviour occurs on different types of tasks, in both coding and mathematical reasoning. We showed that some correct solutions are not positive indicators of model capabilities but artefacts of overtraining, casting doubt on the legitimacy of other solutions offered by the model. We add to the voices calling for dynamic benchmarks and controlled datasets that allow for real assessments of LLM behaviour and capabilities, and for key evaluation datasets to be accompanied by semantically similar yet unsolvable variations, to be used as a control sample against contamination. The rates at which performance decays with task augmentation differ significantly between contaminated and new datasets.
While it is not a new finding that many LLMs are severely overfit to training data and that their performance on public benchmarks cannot be trusted [10], the severity of this is yet to be understood, along with its impact on properties like model robustness or sensitivity to changes in user input. LLM performance and resilience to obfuscation decrease sharply on unseen datasets, confirming contamination issues and invalidating measured performance on previously seen data. While behaviour on contaminated data may appear to indicate strong performance, it is misleading, driven by overfitting and memorisation rather than true reasoning. In real-world applications, this eager pattern matching to familiar tasks can negatively impact performance on unseen problems. We show that this overcorrection towards training problems can cause unpredictable and opaque behaviour, introducing potential safety risks, particularly when LLMs are deployed in critical systems. LLMs are increasingly deployed in multi-agent systems, where agents exchange messages with one another to achieve a goal. Humans must keep control of these systems, especially when they are critical. However, the findings of this study suggest that LLMs can operate with some success while exchanging messages that humans cannot understand (i.e., obfuscated text), mirroring the famous example in which two agents developed by Facebook began communicating in their own language [30]. This lack of understanding affects how humans interpret these multi-agent systems, undermining human oversight and control. This issue is known as intellectual debt: we know that a system operates as expected but do not understand how [31]. A lack of mechanisms to mitigate intellectual debt threatens the sustainability and adoption of AI-based software systems [32], [33].
This paper investigates the ability of large language models (LLMs) to recognise and solve tasks that have been obfuscated beyond recognition. Focusing on competitive programming and benchmark tasks (LeetCode and MATH), we compare performance across multiple models and obfuscation methods, such as noise and redaction. We demonstrate that all evaluated LLMs can solve tasks obfuscated to a level where the text would be unintelligible to human readers and no longer contains key pieces of instruction or context. We introduce the concept of eager pattern matching to describe this behaviour, which is not observed on tasks published after the models' knowledge cutoff dates, indicating strong memorisation or overfitting to training data rather than legitimate reasoning about the presented problem. We report empirical evidence of distinct performance decay patterns between contaminated and unseen datasets. We discuss the implications for benchmarking and evaluations of model behaviour, arguing for caution when designing experiments using standard datasets. We also propose measuring the decay of performance under obfuscation as a possible strategy for detecting dataset contamination, and highlight potential safety risks and interpretability issues for automated software systems.
# 1 Introduction Deploying Deep Learning (DL) models in real-time applications is challenging due to their high computational demands, particularly on edge devices such as smartphones and IoT systems (Szegedy et al., 2017; Deng et al., 2020; Krishnamoorthi, 2018). Traditional DL models, while effective, often exceed the resource capacities of these devices, causing issues like increased latency, higher energy consumption, larger memory requirements, and privacy risks (Han et al., 2015; Howard et al., 2017; Sze et al., 2017). LLMs like GPT-4 and DeepSeek-R1 achieve remarkable performance but are computationally expensive, making efficient model compression essential for resource-limited environments (Chen and Varoquaux, 2024). This highlights a critical research gap: while significant attention has focused on developing large models, the systematic improvement of compression techniques for practical deployment remains under-explored. Our work addresses this gap by introducing an enhanced Knowledge Distillation (KD) framework that maintains model interpretability while achieving significant compression. The paper is structured as follows: Section 2 details the methodology, including the teacher-student framework, KD approach, Attention Transfer (AT), and the incorporation of integrated gradients for data augmentation. Section 3 explains the experimental procedure, including the dataset used and the hyperparameter search for KD and AT. Section 4 presents the experimental results, demonstrating the efficacy of our technique using MobileNet-V2 on the CIFAR-10 dataset. Finally, Section 5 concludes by discussing potential future research directions in model compression for edge AI applications and the broader implications of our findings. # 1.1 Model Compression Model compression techniques are generally divided into four categories: model pruning, parameter quantisation, low-rank factorisation, and knowledge distillation. 
Table 1 summarises these methods, which address various needs such as reducing model size, improving computational speed, or maintaining accuracy under constraints (Cheng et al., 2017; Molchanov et al., 2016; Hubara et al., 2016; Jaderberg et al., 2014). Recent reviews (Wang et al., 2024, 2020a; Liu et al., 2022) highlight that while methods like pruning and quantisation often require a trade-off between accuracy and efficiency, KD offers a more balanced solution. By transferring knowledge from a larger ‘teacher’ model to a smaller ‘student’ model, KD enables the student to retain much of the performance of the teacher while significantly reducing computational requirements. This makes KD particularly effective for maintaining both efficiency and performance in resource-constrained environments (Hinton et al., 2015). Table 1 Most widely-used model compression techniques. # 1.2 Explainable AI Explainable AI (XAI) techniques seek to make machine learning models transparent and interpretable for human users. While various XAI approaches exist, such as LIME (Ribeiro et al., 2016) and SHAP (Lundberg and Lee, 2017), we focus on Integrated Gradients (IG) and attention mechanisms for their particular advantages in knowledge distillation. These techniques not only provide insights into model decision-making but also offer practical benefits for guiding compression. Interpretability is especially crucial in model compression, as practitioners need to verify that compressed models maintain both accuracy and fidelity to the original decision-making process. This is particularly important in high-stakes domains such as healthcare, where compressed models deployed on edge devices must remain explainable. For example, a compressed model for pneumonia detection from chest X-rays should highlight the same suspicious regions as the full model to assist clinicians in validating AI recommendations. In our approach, we combine these interpretability tools with knowledge distillation.
Attention mechanisms ensure the student model learns to focus on the same regions as the teacher, while IG provides pixel-level attribution maps that guide feature learning. This integration of XAI with knowledge distillation provides both performance benefits and improved interpretability, which we quantify in our experimental results. # 1.3 Previous Works Our literature analysis examined model compression through knowledge distillation in image classification from 2017 onwards, focusing on compression factors and accuracy metrics across multiple datasets. The studies revealed a complex relationship between model compression and performance, with compression factors ranging from 1.4x to 127x and varied accuracy impacts ranging from -8.63% to +1.54%. Figure 1 shows the relationship between compression factors and accuracy changes across different studies and datasets. Clear trade-offs between model size reduction and performance emerge, with studies like Chen et al. (2019) showing that outcomes vary significantly with dataset complexity. Our analysis found a weak negative correlation ($r = -0.114$, $p = 0.414$) between compression factor and accuracy loss, indicating that higher compression doesn’t consistently lead to larger accuracy drops—some highly compressed models maintain strong performance while others experience significant degradation. Table 2 presents statistical data across different datasets, revealing that simpler datasets achieve higher compression with minimal accuracy loss (MNIST: 44.7x average compression, -0.37% median accuracy change), while complex datasets show more modest compression (ImageNet: 7.5x average, -1.40% median accuracy change). Notably, some studies report student models outperforming their teachers in specific tasks (Ashok et al., 2017; Gou et al., 2023), suggesting that distillation can sometimes refine model performance beyond simple compression or that the teacher was not properly trained.
Recent developments have focused on more sophisticated distillation techniques to better balance these trade-offs. Studies such as Gou et al. (2022, 2023) introduced multi-level and hierarchical distillation, offering finer control over the compression-accuracy balance. Choi et al. (2020) explored adaptive distillation strategies that dynamically adjust based on task complexity, improving performance on challenging datasets. Most relevant to our work, Wu et al. (2023) utilised Integrated Gradients to transfer attribution-based knowledge in NLP tasks. While their approach incorporated IG in the loss function for BERT models, our research applies a different methodology to image classification using MobileNet-V2. We adapt IG as a data augmentation technique rather than an explicit loss term, guiding the student model toward critical focus areas within images. By precomputing IG for the dataset, we significantly reduce computational demands during training. Our methodology also evaluates knowledge distillation across student models of varying compression rates, providing insights into scalability and adaptability. In this work we make the following contributions:

• Propose a novel model compression method using integrated gradients to guide the learning of the smaller model and compare it to attention transfer.
• Introduce a more standardised approach to evaluate the performance of model compression algorithms.
• Perform Monte Carlo simulations showing that the improvements by KD, AT, and IG are statistically significant even under variation of the training data.

Fig. 1 Difference in accuracy between the teacher model and the student model, as a function of the compression factor for the studies included in our review. The black dots are the teacher accuracies linking to the performance of the student models. A longer and flatter line means better results. Articles that did not report a compression factor were excluded. The red dashed line represents the mean compression factor of 11.41. The blue dashed line represents the mean accuracy of 83.46%. Each marker colour represents a different dataset.

Table 2 Mean and median for the change in accuracy from teacher to student (∆Accuracy) and Compression Factor (CF) of different datasets.

Fig. 2 Knowledge distillation process using integrated gradients for data augmentation. The teacher model (green) employs a temperature hyperparameter $T = \tau$ where $\tau > 1$ in its softmax function to produce soft targets, which, along with the hard labels from the dataset, guide the training of the student model (blue). Integrated gradients (brown) are overlaid with the original images to generate enhanced data that focuses on critical features that the student model should use during training.

# 2 Methodology

Our approach combines three key components to achieve efficient model compression while maintaining interpretability: KD for transferring model knowledge, AT for preserving spatial understanding, and IG for feature-level guidance. Figure 2 provides an overview of how these components work together in our framework. We first describe each component individually, then detail their integration and implementation.

# 2.1 Knowledge Distillation Framework

Knowledge Distillation transfers information from a teacher model to a student model through two primary mechanisms: (1) soft targets generated by the teacher model with temperature scaling and (2) intermediate representations that capture the internal processing of the teacher. The core mechanism leverages ‘softened’ output probabilities that reflect the confidence levels of the teacher model across all classes, revealing the relational structure between categories. For example, when classifying an image as ‘car’, the teacher might assign meaningful probabilities to visually similar classes like ‘truck’, helping the student understand both certainty and ambiguity in predictions.
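This softening effect can be illustrated directly with a temperature-scaled softmax; the logits below are invented for illustration only.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax: higher T broadens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [8.0, 4.0, 1.0]        # e.g. 'car', 'truck', 'bird' (illustrative)
hard = softmax(logits, T=1.0)   # near one-hot: almost all mass on 'car'
soft = softmax(logits, T=4.0)   # soft targets: 'truck' receives visible mass
print([round(p, 3) for p in hard])
print([round(p, 3) for p in soft])
```

At T = 1 the distribution is nearly one-hot, while at T = 4 the relational information (e.g. that 'truck' is a plausible second choice) becomes visible to the student.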
The temperature hyperparameter in the softmax function controls this softening process, with higher values broadening the output distribution to prevent overconfidence and reveal more of the uncertainty of the teacher. The objective function for standard knowledge distillation is defined as:

$$ L_{KD} = (1 - \alpha) L_{\mathcal{H}} + \alpha L_{K\mathcal{L}}. $$

Here $L_{\mathcal{H}}$ represents the cross-entropy loss between the student predictions and the ground-truth hard labels, and $L_{K\mathcal{L}}$ is the Kullback-Leibler divergence between the softened output distributions of the teacher and student, i.e. the soft labels, scaled by temperature $T \in [1, 20]$. The hyperparameter $\alpha$ weights the two signals. For details, see Gou et al. (2021). We selected MobileNetV2 as our teacher model due to its optimal balance between accuracy (93.9% on CIFAR-10) and efficiency (2.2M parameters). Its inverted residual architecture provides a feature-rich structure ideal for knowledge transfer while maintaining practical deployment potential in resource-constrained environments.

# 2.2 Attention Transfer

Building upon the base KD framework, we incorporate attention transfer to ensure the student model learns to focus on the same important regions as the teacher. In neural networks, attention mechanisms reveal which regions of the input the network prioritises during decision-making. These attention maps provide insights into the reasoning process of the model and can be derived from activations at various layers within the network.
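Before the objective is extended with an attention term, the plain distillation loss of Equation 1 can be sketched numerically. This is a pure-Python stand-in (a real implementation would typically use a deep learning framework, e.g. `torch.nn.KLDivLoss`), the logits and label are invented, and the T² rescaling of the KL term follows Hinton et al. (2015) rather than appearing explicitly in Equation 1.

```python
import math

def softmax(z, T=1.0):
    e = [math.exp(v / T) for v in z]
    s = sum(e)
    return [x / s for x in e]

def kd_loss(student_logits, teacher_logits, label, alpha=0.5, T=4.0):
    """L_KD = (1 - alpha) * CE(hard labels) + alpha * KL(teacher || student),
    with the KL term computed on temperature-softened distributions."""
    p_s = softmax(student_logits)                # T = 1 for the hard-label term
    ce = -math.log(p_s[label])
    p_t = softmax(teacher_logits, T)             # softened teacher (soft labels)
    q_s = softmax(student_logits, T)             # softened student
    kl = sum(pt * math.log(pt / qs) for pt, qs in zip(p_t, q_s))
    # T^2 rescaling keeps gradient magnitudes comparable (Hinton et al., 2015)
    return (1 - alpha) * ce + alpha * (T ** 2) * kl

loss = kd_loss([2.0, 1.0, 0.1], [3.0, 1.5, 0.2], label=0)
print(round(loss, 4))
```

When the student's logits match the teacher's, the KL term vanishes and only the (weighted) cross-entropy against the hard labels remains.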
When integrating attention transfer, we extend the objective function to include an attention loss term that aligns the spatial focus of the student with that of the teacher:

$$ L_{Total} = (1 - \alpha) L_{\mathcal{H}} + \alpha L_{K\mathcal{L}} + \gamma L_{\mathrm{AT}} $$

where $L_{\mathrm{AT}} = \| A_S - A_T \|_2^2$ represents the Mean Squared Error between the $L_2$-normalised student and teacher attention maps, derived from the middle-layer activations, and $\gamma$ controls the weighting of this attention alignment in the overall loss function. A few representative attention maps are visualised in Figure S1 (Online Resource 1).

# 2.3 Integrated Gradients as Data Augmentation

To further enhance the knowledge transfer process, we introduce IG-based data augmentation, as defined in Sundararajan et al. (2017). Integrated gradients provide pixel-level attribution maps that identify which input features most influence the decisions of the model. The IG for an input feature $i$ of the image is calculated as:

$$ IG_i(x) = (x_i - x_i') \int_{\beta=0}^{1} \frac{\partial F(x' + \beta(x - x'))}{\partial x_i} \, d\beta $$

where $x \in [0, 1]^{C \times H \times W}$ is the input image tensor with channels (C), height (H), and width (W), $x'$ the baseline (typically a zero tensor of identical shape), $F$ the model function, and $\beta$ the interpolation coefficient that moves from the baseline to the input. Rather than using IG as an explicit loss term, we implement it as a data augmentation technique, where IG maps are overlaid onto the original images with a controlled probability, as illustrated in Figure 3.
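In practice the integral in Equation 3 is approximated by a Riemann sum over interpolation steps. A minimal sketch is shown below; the toy model F(x) = Σ xᵢ² with its analytic gradient is a stand-in for the network, chosen so that the completeness property (attributions summing to F(x) − F(x′)) is easy to check.

```python
def integrated_gradients(x, baseline, grad_fn, steps=100):
    """Approximate IG_i(x) = (x_i - x'_i) * integral of dF/dx_i along the
    straight path from baseline x' to input x, via a midpoint Riemann sum."""
    n = len(x)
    total = [0.0] * n
    for k in range(steps):
        beta = (k + 0.5) / steps                       # midpoint of each step
        point = [b + beta * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_fn(point)                             # gradient at this point
        for i in range(n):
            total[i] += g[i]
    return [(xi - b) * t / steps for xi, b, t in zip(x, baseline, total)]

def toy_grad(p):
    """Gradient of F(x) = sum(x_i^2): dF/dx_i = 2 * x_i (stand-in for a CNN)."""
    return [2.0 * v for v in p]

attr = integrated_gradients([1.0, 2.0, 3.0], [0.0, 0.0, 0.0], toy_grad)
# Completeness check: attributions sum to F(x) - F(baseline) = 14
print([round(a, 3) for a in attr], round(sum(attr), 3))
```

For a real network, `grad_fn` would be replaced by a backward pass through the model, and the maps would be precomputed once per image as described above.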
The IG augmentation is done in three steps (scaling, normalisation to $[0, 1]$, image overlay):

$$ \begin{array}{rl} & IG_{\mathrm{scaled}}(x) = IG(x)^{s} \quad \text{with scale factor } s \sim \exp(\mathcal{U}[\ln(1), \ln(2)]), \\ & \hat{IG}(x) = \dfrac{IG_{\mathrm{scaled}}(x) - \min(IG_{\mathrm{scaled}}(x))}{\max(IG_{\mathrm{scaled}}(x)) - \min(IG_{\mathrm{scaled}}(x))}, \\ & x_{\mathrm{augmented}} = \left\{ \begin{array}{ll} 0.5 \cdot x + 0.5 \cdot \hat{IG}(x) & \text{with probability } p, \\ x & \text{otherwise}. \end{array} \right. \end{array} $$

The scale factor $s \in [1, 2]$ is drawn from a log-uniform distribution to obtain more evenly spread values across orders of magnitude, avoiding the bias toward larger values that a linear uniform distribution would produce. This range was selected through empirical validation: scaling factors above 2 resulted in excessive feature enhancement that left some samples with no identifiable important pixels after normalisation, while factors below 1 introduced excessive noise by emphasising less discriminative regions, thereby degrading the quality of the attribution guidance. Figure 4 demonstrates how different scaling techniques affect the distribution of normalised IG values. The integrated gradients are overlaid on the whole image with probability $p$, ensuring that the model also sees unaltered images and learns how to classify them. When an image is overlaid with the integrated gradients, the intensity of all pixels is first halved and then the intensity of important pixels is increased. This targeted emphasis helps the model prioritise impactful features, improving both efficiency and interpretability.
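The three steps translate directly into code. A minimal sketch (the array shapes and RNG seed are illustrative; in training the IG maps are precomputed per image):

```python
import numpy as np

rng = np.random.default_rng(0)

def ig_augment(x, ig_map, p=0.1):
    """Overlay a precomputed IG map onto an image with probability p:
    power-scale, min-max normalise to [0, 1], then blend 50/50."""
    if rng.random() >= p:
        return x                                          # unaltered image
    s = np.exp(rng.uniform(np.log(1.0), np.log(2.0)))     # log-uniform s in [1, 2]
    scaled = ig_map ** s
    ig_hat = (scaled - scaled.min()) / (scaled.max() - scaled.min())
    return 0.5 * x + 0.5 * ig_hat

x = rng.random((3, 32, 32))         # image in [0, 1], CHW layout
ig_map = rng.random((3, 32, 32))    # precomputed attribution map
aug = ig_augment(x, ig_map, p=1.0)  # p=1 forces the overlay branch
assert aug.shape == x.shape and aug.min() >= 0.0 and aug.max() <= 1.0
```

Because the blend halves the original intensities before adding the normalised attributions, the output stays in $[0, 1]$ without clipping.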
The final training objective when combining KD, AT, and IG augmentation remains as in Equation 2, with IG influencing the learning process through the modified input data rather than through an additional loss term.

Fig. 3 Implementation of IG as a data augmentation technique on CIFAR-10. The top row shows the original images from various classes. The middle row displays the Integrated Gradients, highlighting areas of the image that significantly influence the predictions of the teacher model. The bottom row presents the overlaid images, which combine the original images with their respective integrated gradients to emphasise regions of interest, so that the student can more easily focus on these influential areas.

The approach is designed to be scalable to larger datasets like ImageNet. While the precomputation time increases linearly with dataset size, it remains a one-time cost that significantly reduces the overall training time. Our experiments on ImageNet subsets (Section 3.1.2) demonstrate the effectiveness of the approach on more complex data distributions, suggesting its viability for full ImageNet-scale models.

# 2.4 Use of Artificial Intelligence

A large language model (Claude 3.7 Sonnet by Anthropic) was used in the drafting of some sections of this manuscript. After the initial training of our models and analysis of results, we used Claude to assist in organising and articulating our findings, particularly in the Methods and Results sections. All AI-generated text was thoroughly reviewed, edited, and verified by the authors to ensure accuracy and alignment with our research findings. All data analysis, figures, and technical content were produced directly by the authors without AI assistance.

# 3 Experiments

# 3.1 Data

# 3.1.1 Training and Testing Sets

For our experiments, we used the standard CIFAR-10 dataset, which consists of 60,000 $32 \times 32$ colour images divided into 10 classes with 6,000 images per class.
Following the standard protocol, we used 50,000 images for training and 10,000 for testing, maintaining the original balanced class distribution. For hyperparameter optimisation, we split CIFAR-10's standard training set (50,000 images) into training (40,000 images, $80\%$) and validation (10,000 images, $20\%$) subsets. We used this validation split to determine optimal values for the knowledge distillation hyperparameters $(\alpha, T)$, the integrated gradients overlay probability $(p)$, and the attention transfer weight $(\gamma)$. Once optimal hyperparameters were identified, we retrained all models using the complete 50,000-image training set and evaluated final performance on the separate 10,000-image test set.

Fig. 4 Distribution of normalised IG using different scaling techniques. Top row: Grayscale attribution maps for standard normalised IG (left), IG with minimum log scale factor 0.6 ($\mathrm{IG}_{min}$, middle), and IG with maximum log scale factor 2.0 ($\mathrm{IG}_{max}$, right). Middle row: The same attribution maps overlaid on the original image, showing how different scaling affects feature visibility. Bottom row: Frequency histograms of attribution values for each scaling technique. The standard IG shows a right-skewed distribution, $\mathrm{IG}_{min}$ displays a more gradual decline across the 0-0.4 range, expanding medium attributions, while $\mathrm{IG}_{max}$ concentrates values near zero, highlighting only the strongest features. The minimum log scale (0.6) enhances feature visibility by better distributing attribution values in the midrange while preserving important structures.

The CIFAR-10 dataset was selected for its moderate complexity, balanced classes, and widespread use as a benchmark in image classification tasks.
Its compact image size ($32 \times 32$) also allows for efficient experimentation while still presenting meaningful classification challenges that benefit from the subtle feature distinctions that our IG-enhanced approach aims to capture.

# 3.1.2 Validation Set from ImageNet

To assess the generalisation capabilities of our models beyond their training domain, we created a validation set from ImageNet classes that correspond to CIFAR-10 categories. This pairing of CIFAR-10 with ImageNet serves multiple purposes: (1) it evaluates performance on more diverse intra-class variations and (2) it assesses whether the feature importance mechanisms learned on simpler data generalise to more complex examples. Our validation approach involved several carefully considered steps. First, we identified ImageNet classes that semantically align with each CIFAR-10 category. For example, ImageNet classes such as 'Airliner', 'Warplane', and 'Airship' were mapped to the CIFAR-10 'Plane' class. To isolate the effect of model compression from potential teacher model errors, we performed a preliminary evaluation using the teacher model and retained only those samples correctly classified by the teacher. This filtration process ensures that performance degradation observed in compressed models can be attributed specifically to compression effects rather than inherent limitations of the teacher model on challenging samples. Out of the 5250 total ImageNet samples in our validation set, the teacher model correctly predicted 3537 images (67.37% accuracy), with the remaining 1713 incorrectly predicted images being excluded from our analysis. Table S13 (Online Resource 1) provides the complete mapping between CIFAR-10 classes and their corresponding ImageNet categories. Notably, two CIFAR-10 classes ('Deer' and 'Horse') did not have direct one-to-one correspondences in ImageNet, reflecting the taxonomic differences between the datasets.
# 3.2 Hyperparameter Optimisation

We conducted an extensive grid search to identify optimal hyperparameters for knowledge distillation, integrated gradients, and attention transfer. Our experiments maintained a consistent architectural configuration with a compression factor of 4.1, reducing the model from 2.2M to 543K parameters. To ensure statistical reliability, each configuration underwent 10 independent training runs to account for initialisation variance. Table 3 presents the complete search space for all hyperparameters, with ranges selected based on preliminary experiments and established literature values. Note that our ablation study also tests all combinations of KD, IG, and AT, including configurations in which individual weights are zero.

# 3.2.1 Knowledge Distillation Hyperparameter Search

The knowledge distillation optimisation focused on two key hyperparameters: the distillation weight $\alpha$ and the temperature $T$. The selected ranges for $\alpha$, as shown in Table 3, were chosen to examine both subtle and strong influences of the guidance from the teacher model. Similarly, the temperature values were selected to investigate varying degrees of softness in the probability distributions.

# Springer Nature 2021 LATEX template

Table 3 Hyperparameter search space for model compression. The table presents the explored ranges for KD, IG, and AT hyperparameters. Each hyperparameter range was selected based on preliminary experiments and literature review to balance model performance and training stability. $T$ represents the temperature for softening probability distributions, $\alpha$ the knowledge distillation loss weight, $p$ the overlay probability for integrated gradients, and $\gamma$ the attention transfer weight coefficient.
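The search itself is a plain Cartesian grid over the four hyperparameters. A sketch with hypothetical placeholder ranges (Table 3's exact values are not reproduced here; the lists merely bracket the optima reported in Section 4.1):

```python
from itertools import product

# Hypothetical grids bracketing the reported optima
# (alpha = 0.01, T = 2.5, p = 0.1, gamma = 0.8) -- not Table 3's actual ranges.
grid = {
    "alpha": [0.01, 0.05, 0.1, 0.5],
    "T":     [1.0, 2.5, 5.0, 10.0, 20.0],
    "p":     [0.05, 0.1, 0.25, 0.5],
    "gamma": [0.1, 0.4, 0.8, 1.0],
}

configs = [dict(zip(grid, vals)) for vals in product(*grid.values())]
# Each configuration would be trained for 10 independent runs and scored
# on the 10,000-image validation split; here we only enumerate.
assert len(configs) == 4 * 5 * 4 * 4   # 320 candidate configurations
```

In the paper the KD, IG, and AT searches are staged sequentially (Sections 3.2.1-3.2.3), which shrinks the effective search space well below the full product shown here.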
# 3.2.2 Integrated Gradients Optimisation

Building upon the knowledge distillation framework, we investigated the impact of the overlay probability $p$ for integrated gradients-based data augmentation. This hyperparameter governs how frequently IG maps are applied to training images, requiring careful calibration to balance attribution information with the preservation of original image characteristics.

# 3.2.3 Attention Transfer Optimisation

The attention transfer investigation centred on the weight hyperparameter $\gamma$, which determines the contribution of attention map alignment in the loss function. As detailed in Table 3, we examined a broad range of values to understand the full spectrum of attention transfer influence during student model training.

# 3.3 Ablation Study

To isolate the contributions of individual components, we conducted a comprehensive ablation study testing various combinations of KD, IG, and AT. Each configuration used the optimal hyperparameters identified in the previous section and was evaluated over 10 independent runs to ensure robust performance assessment. For this study, we used $100\%$ of the standard training data (50,000 images) for each run, with variation coming only from random weight initialisation and training processes. The configurations tested included standalone approaches (KD alone, IG alone, AT alone) as well as combinations (KD & IG, KD & AT, IG & AT, and KD & IG & AT), allowing us to assess both individual and synergistic effects between components.

# 3.4 Compression Factor Analysis

We systematically evaluated the trade-off between model size and performance by testing student models with compression factors ranging from 2.2 (relatively modest compression) to 1121.7 (extreme compression). For moderate compression factors (2.2x-12.04x), we conducted 10 independent training runs per configuration to ensure robustness, while more extreme compression factors were evaluated with 3 training runs each.
Unlike our previous experiments, which reported results from the best of multiple runs, this analysis reports the average performance to better represent the expected outcomes in practical deployment scenarios. This methodological choice eliminated potential selection bias that might obscure the true relationship between compression and accuracy degradation. The compression levels were achieved through progressive layer removal from the MobileNetV2 architecture, maintaining the early feature extraction layers while systematically reducing the network depth. This approach creates a controlled experiment where the fundamental architectural characteristics remain consistent while the model capacity is progressively reduced. Additionally, we measured both training time per epoch and inference time across all three GPU configurations (RTX 3060 Ti, RTX 3090, and RTX A5000) to evaluate how computational efficiency scales with model compression under different hardware environments. These comprehensive measurements provide insights into the practical deployment considerations beyond accuracy metrics alone.

# 3.5 Monte Carlo Simulation

To rigorously assess the statistical robustness of our approach, we conducted Monte Carlo simulations comprising 60 independent runs for each configuration (Student baseline, KD, KD & IG, and KD & IG & AT). Unlike the ablation study, each Monte Carlo run used a randomly selected 80% subset of the training data (40,000 images) while maintaining evaluation on the full test set (10,000 images). This methodology reveals the distribution of potential outcomes rather than single point estimates, assesses model robustness to variations in training data, provides statistical confidence intervals for performance metrics, and eliminates potential selection bias from fortuitous initialisations or data splits.
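The run-count trade-off analysed below follows from the $1/\sqrt{n}$ scaling of the standard error. A back-of-the-envelope sketch anchored at the 60-run half-width of $\pm 0.15\%$ (a pure normal-approximation extrapolation; the exact values reported in the text also reflect the changing t quantile at different degrees of freedom):

```python
import math

base_n, base_hw = 60, 0.15   # 95% CI half-width (in % accuracy) at 60 runs

def half_width(n):
    """Extrapolate the CI half-width assuming pure 1/sqrt(n) scaling."""
    return base_hw * math.sqrt(base_n / n)

# Fewer runs widen the interval, more runs narrow it.
assert half_width(50) > half_width(60) > half_width(70)
assert round(half_width(70), 2) == 0.14
```

The extrapolated 70-run half-width matches the reported $\pm 0.14\%$; at 50 runs the scaling alone gives roughly $\pm 0.16\%$, close to the reported $\pm 0.17\%$.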
The stochastic nature of deep learning models (arising from random weight initialisation, mini-batch selection, and optimisation dynamics) means that identical architectures and hyperparameters can yield different results across training runs. Our Monte Carlo approach captures this inherent variability, enabling a more comprehensive assessment of the performance characteristics of each configuration. The choice of 60 runs represents a carefully considered balance between statistical power and computational feasibility. Our analysis indicates that reducing to 50 runs would widen the $95\%$ confidence interval from $\pm 0.15\%$ to $\pm 0.17\%$ (a $13\%$ increase in uncertainty), while requiring approximately $17\%$ less computation time. Conversely, increasing to 70 runs would narrow the confidence interval to $\pm 0.14\%$ (a 7% improvement in precision) but require 17% more computational resources. Given the diminishing returns in statistical precision beyond 60 runs and our computational constraints, this run count provides an optimal balance between robust statistical analysis and practical implementation. A detailed analysis of the relationship between run count and confidence intervals for the KD & IG configuration (4.12 compression factor) is provided in Table S8 (Online Resource 1).

# 3.6 ImageNet Subset Evaluation

Following the creation of the ImageNet validation subset described in Section 3.1.2, we evaluated all model configurations on these diverse images. This cross-dataset evaluation serves as a strong test of generalisation capabilities, assessing whether the knowledge transferred from teacher to student extends beyond the specific characteristics of the training data. The evaluation used the best-performing model from each configuration, applying it directly to the ImageNet subset without any fine-tuning or domain adaptation.
To address the resolution discrepancy between ImageNet's standard $224 \times 224$ pixel images and CIFAR-10's $32 \times 32$ pixel format, we utilised a downsampled version of ImageNet at $32 \times 32$ resolution, ensuring compatibility with our model architectures without requiring structural modifications.

# 3.7 Computational Infrastructure

All experiments were conducted using an NVIDIA GeForce RTX 3090 GPU (24GB VRAM), an AMD Ryzen Threadripper 1950X CPU (16 cores/32 threads), and 96GB of system RAM. With this hardware configuration, training a student model for 100 epochs required approximately 21 minutes, while precomputing integrated gradients for the entire CIFAR-10 training set took approximately 2 hours. The Monte Carlo simulations with 60 runs per configuration required approximately 84 GPU-hours in total. For the compression factor analysis, we utilised additional server configurations: an NVIDIA GeForce RTX 3060 Ti (8GB VRAM), AMD Ryzen 5 5600X CPU (6 cores/12 threads), and 16GB system RAM; and an NVIDIA RTX A5000 (24GB VRAM), AMD Ryzen Threadripper PRO 5955WX CPU (16 cores/32 threads), and 504GB system RAM. This approach allowed us to evaluate how inference and training times varied across different hardware environments as compression factors increased. This distributed infrastructure provided sufficient computational capacity to maintain consistent experimental conditions across all evaluations.

# 4 Results and Discussion

# 4.1 Hyperparameter Optimisation Results

Our systematic grid search across multiple hyperparameters revealed optimal configurations for the knowledge distillation, integrated gradients, and attention transfer components. These findings establish the foundation for our subsequent model compression experiments and are included in Online Resource 1.

# 4.1.1 Knowledge Distillation Hyperparameter Optimisation

The results of our KD hyperparameter optimisation revealed an optimal configuration of $\alpha = 0.01$ and $T = 2.5$, as summarised in Table S10 (Online Resource 1) and visualised in Figure S2 (Online Resource 1). The surface plot demonstrates that lower $\alpha$ values (0.01-0.05) consistently preserve model performance during knowledge distillation. This finding suggests that subtle influences from the soft targets of the teacher model provide more effective guidance than stronger distillation weights. The optimal temperature of $T = 2.5$ indicates that moderate softening of probability distributions strikes an effective balance between preserving class relationships and maintaining sufficient categorical distinction.

# 4.1.2 Integrated Gradients Optimisation

Building upon the optimal KD hyperparameters, our investigation of IG overlay probabilities revealed 0.1 as the optimal value, achieving $92.6\%$ accuracy; see Table S11 (Online Resource 1) for details. This relatively sparse application of integrated gradients effectively balances feature emphasis with model generalisation. Higher probabilities (0.25, 0.5) demonstrated performance degradation, suggesting that excessive attribution information may cause the model to overemphasise specific features at the expense of learning diverse representations. Conversely, reducing the probability below 0.1 resulted in insufficient guidance about important features, as evidenced by the decline in performance at $p = 0.09$.

# 4.1.3 AT Hyperparameter Optimisation

The optimisation of the attention transfer weight $\gamma$ yielded an optimal value of 0.8, achieving $92.4\%$ accuracy; see Table S12 (Online Resource 1) for details. This relatively high weighting of attention map alignment in the loss function demonstrates the significance of attention transfer in guiding student model learning. The performance curve exhibits a clear peak at $\gamma = 0.8$, with notable degradation both above and below this value, indicating a sensitive optimum that effectively balances attention-based and standard classification objectives.

# 4.1.4 Attention Map Analysis

Visual inspection of the attention maps in Figure S1 (Online Resource 1) reveals that student models consistently display more concentrated attention patterns than the teacher model, despite low MSE values (0.0008-0.0054). This intensity disparity occurs because the MSE measures relative attention patterns rather than absolute values, with student models focusing attention more intensely on fewer pixels. The automobile class consistently shows higher MSE values (0.0045-0.0054) across all methods, suggesting this class presents particular challenges for attention transfer due to its complex feature structure and diverse orientations.

Table 4 Comparison of testing accuracies across different methods for the student model with 4.1 compression factor. $\Delta$Acc. is the difference between the testing accuracy of the teacher and the highest testing accuracy of the student model. Statistical significance was assessed using paired t-tests against the student baseline across 10 independent runs using $100\%$ of the training data.

The KD & IG & AT approach shows more consistent performance across all classes, particularly improving on traditionally difficult classes like ship (reducing the MSE from 0.0027 to 0.0019 compared to AT alone). This demonstrates that while MSE provides a useful quantitative metric, the qualitative aspects of attention distribution, such as focus area consistency and feature highlighting, also play important roles in effective knowledge transfer.

# 4.2 Ablation Study

Table 4 presents the accuracies of various configurations for the student model with 4.12 compression factor. The teacher model achieves $93.9\%$, while the student model without distillation reaches $91.6\%$.
These results are based on 10 independent runs using the complete training dataset (50,000 images), which accounts for the generally higher accuracies compared to our Monte Carlo simulations, which used only $80\%$ of the training data. To establish statistical significance, we conducted paired t-tests comparing each approach to the student baseline. The paired test design accounts for the dependency between observations, as all configurations were evaluated on the same test sets. Applying KD alone improves accuracy to $92.3\%$ ($p = 0.030$), representing a substantial gain of 0.8 percentage points over the compressed model baseline. Among all configurations, KD combined with IG achieves the highest accuracy of $92.6\%$ ($p < 0.001$), with a $\Delta$Acc of $-1.89\%$ relative to the teacher, and a statistically significant improvement of 1.1 percentage points over the student baseline. This configuration demonstrates that IG enhances distillation by guiding the student to focus on critical features, achieving the highest relative improvement of $44.8\%$. The KD & IG & AT combination yields $92.42\%$ accuracy ($p < 0.001$), closely trailing KD & IG. Interestingly, while this combined approach was expected to produce the best results by leveraging all three techniques, the marginal decrease in performance compared to KD & IG suggests potential interaction effects or slight overfitting when all mechanisms are employed simultaneously. This may be due to competing optimisation objectives between attention transfer and integrated gradients, where the focus of AT on spatial attention patterns could occasionally conflict with the emphasis of IG on feature-level attributions. Standalone configurations of IG and AT produce lower accuracies of $92.01\%$ and $91.6\%$ respectively, while their combination (IG & AT) achieves $91.8\%$. Statistical analysis shows that KD & IG ($p < 0.001$), KD & AT ($p = 0.006$), and KD & IG & AT ($p < 0.001$) provide significant improvements over the baseline, while IG & AT ($p = 0.080$), IG ($p = 0.404$), and AT ($p = 0.895$) do not reach statistical significance at the conventional $p < 0.05$ threshold. This ablation study confirms that KD serves as the foundation of our compression framework, with IG providing significant complementary benefits. The KD & IG configuration emerges as the most effective, demonstrating that IG enhances feature-level alignment between teacher and student, resulting in superior accuracy and interpretability. These results are particularly notable in the context of edge device deployment, where the 4.1x reduction in model size translates to proportional decreases in memory requirements and inference time, with only a 1.9 percentage point accuracy drop from the teacher model.

# 4.3 Compression Factor Analysis

Our analysis reveals clear patterns in how increasing compression affects model performance across different training configurations, as illustrated in Figure 5. The performance-compression relationship demonstrates both the capabilities and limitations of our approach, with several distinct operational ranges emerging from the data. In the moderate compression range ($2.2\times$ to $12\times$), both KD and KD & IG demonstrate remarkable stability, maintaining accuracies above 96% of the teacher's performance even at $12\times$ compression, a significant achievement considering the model size reduction to just $8.3\%$ of the original architecture. The detailed view in the inset figure highlights that at $4.1\times$ compression, our primary experimental configuration, KD & IG achieves approximately $98.6\%$ of the teacher model's accuracy while substantially reducing computational demands. Beyond $28\times$ compression, all configurations experience accelerated performance degradation, as shown by the steeper decline in the accuracy curves.
The differences between techniques become less pronounced, with the performance curves converging. This pattern suggests that at extreme compression levels, the fundamental limitations of model capacity overshadow the benefits of sophisticated knowledge transfer approaches. This becomes particularly evident at our highest tested compression ratio ($1122\times$), where all configurations experience a significant drop to around $55{-}60\%$ of the teacher model's accuracy. At this extreme compression level, the performance differences between the Student, KD, IG, and KD & IG models become minimal, indicating that the benefits of knowledge transfer are largely neutralised when model capacity is severely constrained.

Fig. 5 Testing accuracy (solid lines, left axis) and inference speed-up (dashed line, right axis) as functions of compression factor for different model configurations. The main graph shows the performance-compression trade-off across the full range of compression factors ($1\times$ to $1122\times$), while the inset provides a detailed view of the moderate compression range ($2.2\times$ to $12\times$). KD & IG maintains a consistent performance advantage in this critical range while delivering computational speed-ups that exceed the compression ratio. Note that all data points represent models trained only once, so stochastic fluctuations are present.

Importantly, KD & IG maintains a consistent performance advantage throughout most of the compression spectrum, particularly in the $4{-}30\times$ compression range, the most relevant range for practical edge deployments. Figure 5 also illustrates the computational efficiency gains (dashed line, right axis), showing that the speed-up factors exceed the compression ratios, with our $4.1\times$ compressed model achieving approximately $10\times$ inference speed-up, and our most compressed model ($1122\times$) achieving over $100\times$ speed-up.
These results demonstrate that our method provides reliable and predictable performance-size trade-offs that can be effectively tailored to specific deployment requirements. Detailed computational efficiency and inference time analyses are provided in Figures S5-S6 and Table S9 (Online Resource 1), with full performance metrics available in Table S15 (Online Resource 1).

# 4.4 Monte Carlo Simulation

Figure 6 illustrates the distributions of testing accuracies from 60 Monte Carlo simulations for four configurations, providing robust statistical evidence of performance differences. The baseline student model achieves a mean accuracy of $90.05\%$ (median $90.06\%$), representing the performance of our compressed architecture without advanced knowledge transfer, trained on only 80% of randomly picked images from the training set. KD improves the mean accuracy to $90.65\%$ (median $90.64\%$), demonstrating the value of soft targets in guiding student learning. Most notably, KD with IG yields a mean accuracy of $91.29\%$ (median $91.24\%$), confirming it as the highest-performing configuration with a statistically significant improvement of 1.24 percentage points over the student baseline ($p < 0.001$). The KD & IG & AT configuration achieves a mean accuracy of $90.89\%$ (median $90.88\%$), performing better than KD alone but not matching KD & IG. The paired t-tests (Table 5) confirm the statistical significance of the observed improvements, with all distillation approaches showing significant gains over the student baseline ($p < 0.001$). The KD & IG configuration shows the highest t-statistic (14.80), indicating the most robust improvement.

Table 5 Statistical analysis of the testing accuracies obtained from the Monte Carlo simulation results comparing different approaches for the student model with 4.1 compression factor.
Paired t-tests were conducted against the Student baseline using data from 60 independent runs, each using $80\%$ of the training data.

Interestingly, the KD & IG configuration shows higher variance (std. dev. $0.553\%$) than other approaches, suggesting that while it achieves the highest mean performance, it may be more sensitive to data subset selection and initialisation conditions. This characteristic indicates that in deployment scenarios where consistent performance is prioritised over maximum accuracy, the more stable KD & IG & AT approach (std. dev. $0.297\%$) might be preferable despite its slightly lower mean accuracy. These results confirm our approach delivers consistent improvements across different training conditions, independent of initialisation or dataset variations.

# 4.5 ImageNet Subset Evaluation

Our evaluation on the curated ImageNet subset demonstrates the broad generalisation capabilities of our approach, as shown in Figure S4 (Online Resource 1). The KD & IG configuration achieves 85.7% accuracy on ImageNet, significantly outperforming the 83.8% accuracy of the baseline student model, while maintaining strong performance relative to the $100\%$ baseline of the teacher model. This performance advantage is particularly noteworthy given the substantial domain shift between training and evaluation conditions. While trained exclusively on CIFAR-10's $32 \times 32$ pixel images, our models maintain robust performance when evaluating ImageNet's more challenging $224 \times 224$ pixel images, which exhibit greater intra-class variation and complexity. The improved accuracy of KD & IG over the standalone KD (85.0%) and IG ($85.1\%$) approaches suggests that our combined methodology helps models learn more robust and transferable features. The consistent superior performance of KD & IG across both the CIFAR-10 ($92.6\%$) and ImageNet ($85.7\%$) datasets, as detailed in Table 6, indicates that our approach enhances the ability of the model to identify and leverage class-relevant features rather than dataset-specific characteristics. This cross-dataset generalisation capability is crucial for real-world applications where deployment conditions may differ significantly from training scenarios.

Fig. 6 Distributions of testing accuracies from Monte Carlo simulations across various methods. The histograms depict the performance variability of four configurations: Student (baseline), KD, KD & IG, and KD & IG & AT. The mean and median testing accuracies are indicated for each method, showcasing the influence of knowledge distillation, integrated gradients, and attention transfer on model performance.

Table 6 Performance comparison of different knowledge distillation configurations on CIFAR-10 and ImageNet subsets.
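The paired t-statistic used in Tables 4 and 5 can be computed directly from matched per-run accuracies. A minimal sketch with illustrative numbers (hypothetical values, not the paper's actual run results):

```python
import math

def paired_t(a, b):
    """Paired t-statistic: mean of the per-run differences over its standard error."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # unbiased sample variance
    return mean / math.sqrt(var / n)

# Illustrative matched accuracies over five runs (hypothetical values).
kd_ig   = [92.5, 92.7, 92.6, 92.4, 92.8]
student = [91.5, 91.7, 91.6, 91.5, 91.7]
t_stat = paired_t(kd_ig, student)
assert t_stat > 0   # KD & IG consistently above the baseline
```

Pairing by run exploits the shared test set and training conditions: the per-run differences have much lower variance than the raw accuracies, which is why even small mean gains reach significance.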
Model compression is critical for deploying deep learning models on resource-constrained devices. We introduce a novel method enhancing knowledge distillation with integrated gradients (IG) as a data augmentation strategy. Our approach overlays IG maps onto input images during training, providing student models with deeper insights into teacher models' decision-making processes. Extensive evaluation on CIFAR-10 demonstrates that our IG-augmented knowledge distillation achieves 92.6% testing accuracy with a 4.1x compression factor, a significant 1.1 percentage point improvement ($p<0.001$) over non-distilled models (91.5%). This compression reduces inference time from 140 ms to 13 ms. Our method precomputes IG maps before training, transforming substantial runtime costs into a one-time preprocessing step. Our comprehensive experiments include: (1) comparisons with attention transfer, revealing complementary benefits when combined with our approach; (2) Monte Carlo simulations confirming statistical robustness; (3) systematic evaluation of compression factor versus accuracy trade-offs across a wide range (2.2x-1122x); and (4) validation on an ImageNet subset aligned with CIFAR-10 classes, demonstrating generalisability beyond the initial dataset. These extensive ablation studies confirm that IG-based knowledge distillation consistently outperforms conventional approaches across varied architectures and compression ratios. Our results establish this framework as a viable compression technique for real-world deployment on edge devices while maintaining competitive accuracy.
[ "cs.CV", "cs.AI", "cs.LG", "68T05, 68T07", "I.2.6; I.4.2; I.4.9" ]
# 1 Introduction Along with the digitalization of healthcare and significant advancements in radiology, multimedia data such as videos, images, and texts stored in the Picture Archiving and Communication Systems (PACS) of hospitals are growing faster than ever. When facing difficult cases in clinical routine, radiologists tend to look at previous cases to determine a diagnosis. However, most access to such data is based on patient identification or keyword-based queries (such as modality, organ, symptom, etc.) [14], which limits the potential usage of these data. Sometimes it can be hard to summarize certain cases using only a few keywords. To reduce the inaccuracy of text-based search and to fully exploit the intrinsic value of the accumulated data, there is a growing need to build efficient content-based image retrieval (CBIR) systems which accept an image as a query and return a set of similar cases as references for doctors. Content-Based Image Retrieval (CBIR) is a technique for retrieving images from large databases based on visual content rather than metadata like keywords or descriptions. Performance evaluation plays a crucial role for CBIR systems [16] since it enables the comparison of different systems and allows for analysis of how these systems perform under various application scenarios. However, there is still a lack of universally accepted benchmarks for CBIR, making it difficult to compare different systems objectively. Most existing evaluation methods rely on synthetically produced ground truth or manual annotations. Generally, the goal is to identify whether the retrieved images are relevant by checking whether they share the same label as the query image. In fact, such binary relevance remains sub-optimal since images are very complex information carriers that convey much more information than single labels. 
Therefore, some researchers have proposed to evaluate the performance of CBIR systems using local semantic concepts contained in images (e.g., “tumors”, “bones”, “vessels”), since the semantic content of concepts is richer than that of single labels [20–25]. However, only a few of them [12, 18] take the subtle relationships between semantic concepts into consideration when measuring similarity between images; otherwise, all concepts are treated as equally independent, so that semantically close concepts such as “cat” and “feline” are regarded as totally isolated. In this work, we aim to design a semantically-aware relevance measure for images that can be integrated into the evaluation of CBIR tasks. We anchor this work in the thematic field of medical imaging, but our study remains generic and generalizable to other fields. To this end, we use external Knowledge Graphs (KGs) to model the complex relationships between semantic concepts (e.g., “artery”, “vessel”) which can be extracted from the descriptive text of medical images (e.g., radiological case reports or descriptive captions in the literature). We use the shortest path between concepts to compute their pairwise distance and define a non-binary relevance measure between images based on approximate matching. This measure is finally combined with the Normalized Discounted Cumulative Gain (NDCG) method, which is widely accepted in the broader field of information retrieval. This article is organized as follows. Sec. 2 introduces existing methods for CBIR evaluation. In Sec. 3, we present our contribution aiming at evaluating CBIR performance using semantically-aware relevance for medical images with the integration of external knowledge. In Sec. 4, an experimental evaluation is proposed both for the proposed relevance measure (named nn-IoU) and its computational cost in the context of CBIR. Finally, in Sec. 5, some discussions and research perspectives are provided. 
# 2 Related works Evaluating content-based image retrieval (CBIR) systems remains a significant challenge. As a user-oriented tool, the ultimate performance measure should always be user satisfaction. However, such a subjective measure is not only hard to define but also varies greatly between different users and applications. Some researchers have designed interactive methods to account for user feedback, but such works are inherently limited by the subjectivity of perception [15, 27]. Therefore, objective and standardized evaluation metrics have been widely regarded as the most reliable and fair means of performance measurement. In the remainder of this section, we discuss some label-free (unsupervised) metrics as well as metrics supervised by either single or multiple labels in the literature. Some unsupervised metrics generate synthetic queries by applying data augmentation techniques (e.g., rotation, noise) to original images and evaluate whether the system can retrieve the original (untransformed) image or similar images [6]. However, such synthetic queries may not reflect real user intents, and the lack of standardization makes it hard to compare results across studies. Other label-free metrics aim to quantify retrieval performance by measuring the distance between query images and retrieved images based on low-level features like colors and pixel intensities [9]. However, such unsupervised metrics focus on low-level visual similarity and fail to capture semantic relevance. As a result, supervised metrics that leverage labeled data are generally preferred for evaluating CBIR systems, especially in domains like medical imaging where semantic understanding is critical. Since retrieval and classification tasks share many similarities, many classical metrics for image classification, like precision and recall, have been adapted to CBIR tasks [16]. 
However, both measures are imperfect since images contain a vast amount of diverse visual information including color, texture, and object composition. Precision and recall may not fully capture the nuances of these features using only a single label. Although measures like F1-score and mean average precision have also been proposed to balance precision and recall and to account for ranking order, such measures commonly used in classification tasks remain sub-optimal because classification tasks always deal with a limited number of categories. In fact, retrieval tasks do not deal with fixed classes of items, and the real intents of users can be diverse and complex. For instance, if a user searches for "a person riding a bicycle in a park on a rainy night", measures like precision and recall are unlikely to reflect the user's intent because such a label rarely exists in any dataset. Furthermore, classification metrics often presume that all retrieved images can be binarily regarded as either relevant or irrelevant simply by checking whether retrieved images share the same label as the query image. In fact, since an image contains much more information than a single label, relevance between images should take into account non-binary similarity if possible (e.g., highly relevant, somewhat relevant, irrelevant). Instead of using a single label, some researchers have discussed how to measure the relevance of images based on multiple semantic concepts contained in images [20, 22]. Vogel et al. [22] proposed to divide one image into $10 \times 10$ patches and to detect local objects/concepts contained in each image patch separately. In this case, an image can be represented, for example, as 40% sky + 30% grass + 30% buildings. Thus, the relevance between images can be determined by a combination of semantic concepts instead of single labels. Serieys et al. 
[20] proposed to represent a medical image using semantic concepts extracted from the descriptive text of that image, and they used the Intersection over Union (IoU) between two sets of concepts to measure the non-binary relevance between medical images. However, such a measure remains imperfect because it neglects the subtle relationships between semantic concepts (e.g., “vessel” and “artery”, which represent very close concepts, can be treated as totally independent terms). Meanwhile, such issues can be addressed by approximate matching. Approximate matching [3] refers to a data processing technique often used to find strings that approximately match a given pattern rather than exactly match it. This is particularly useful when dealing with data that may contain typos or variations in spelling, often relying on the Levenshtein distance, which measures the number of single-character edits (insertions, deletions, or substitutions) required to change one string into another. Similarly, such an approximate matching strategy could also be used to distinguish close but unique medical concepts like “vein” and “vessel” using specific similarity measures. In this case, Knowledge Graphs (KGs) are crucial for measuring concept similarity because they provide a structured and semantic representation of relationships with rich context, enabling accurate, interpretable, and domain-specific similarity measures. Various methods have been discussed to measure node similarity using ontologies [10]: path-based methods measure the similarity of two nodes based on the shortest path between them or their closest common ancestor [18], while graph embedding-based methods directly compute the cosine similarity of two nodes after representing them as vectors using graph embedding techniques (e.g., Node2Vec [7], TransE [2]). 
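As an illustration of the string-level approximate matching mentioned above, the Levenshtein distance can be sketched with a standard dynamic program over prefixes (the function name and test strings are ours, purely illustrative):

```python
def levenshtein(s, t):
    """Minimum number of single-character insertions, deletions, or substitutions
    needed to turn s into t, computed by dynamic programming over prefixes."""
    prev = list(range(len(t) + 1))  # edits from "" to each prefix of t
    for i, cs in enumerate(s, 1):
        curr = [i]  # edits from s[:i] to ""
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (cs != ct)))   # substitution or match
        prev = curr
    return prev[-1]
```

For example, `levenshtein("vein", "veins")` is 1 (a single insertion), which is the kind of near-match that exact string comparison would miss.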
In this work, we propose a semantically-aware measure for CBIR evaluation by introducing an external KG to better quantify the relevance between images using semantic concepts extracted from the descriptive text of the corresponding images. # 3 Proposed measure: nn-CUI@K In this section, we present the underlying methodological concepts of the proposed measure (named nn-CUI@K) and the details of its implementation. Given a dataset of medical image-caption pairs, we first extract medical concepts, labeled as Concept Unique Identifiers (CUIs), from the descriptive text of the corresponding image. From these CUI-labeled images, we then introduce an approach relying on an external KG to measure the distance between CUIs and further compute the relevance between images based on approximate matching between two sets of CUIs. Fig. 1: CUI extraction in the ROCOv2 dataset [19]. # 3.1 CUI (concept) extraction The CUI is the basic component of the Unified Medical Language System (UMLS) [1], which can be regarded as a huge vocabulary encompassing a wide range of medical concepts. In the UMLS, each medical concept is assigned a CUI, consisting of the letter "C" followed by seven digits (e.g., the CUI C0042449 refers to the concept Veins). For each image in a medical image-caption pair dataset, we first extract CUIs from the corresponding descriptive text using Named Entity Recognition tools like MedCAT [11]. Such preprocessing has already been performed for some datasets (e.g., ROCO [17]). # 3.2 Distance measurement between CUIs Previous measures proposed in the literature often neglect the relatedness between CUIs, so that semantically close CUIs may be regarded as totally independent or even isolated (e.g., “C0042449:veins” and “C0005847:blood vessels”). 
To avoid such misleading judgments when computing relevance between medical images using CUIs, we choose the KG stored in the UMLS to model the complex relations between CUIs and to measure the distance between CUIs in the next steps. Since the KG contains a comprehensive semantic network of the medical domain, we measure the distance between CUIs $x$ and $y$ simply as the length of the shortest path connecting the two CUIs: $$ \mathrm{dist}(x, y) = \mathrm{len}(\mathrm{shortest\_path}(x, y)) $$ For example, for directly connected CUIs in the KG like C0042449:veins and C0005847:blood vessels, the distance is $\mathrm{dist}(\mathrm{C0042449}, \mathrm{C0005847}) = 1$. For CUIs that are not directly connected, like C0006121:brain stem and C0018670:head, $\mathrm{dist}(\mathrm{C0006121}, \mathrm{C0018670}) = 3$ since the shortest path between them is “C0006121 - C0006104 - C0926510 - C0018670”. Note that we chose a directed acyclic graph to compute the distances; other measures can also be considered, as mentioned in Sec. 2. # 3.3 Semantic-aware relevance measure: nn-IoU Some measures have been used to evaluate the similarity between two sets of medical concepts, such as the IoU proposed in [20] and the Bipartite Matching (BM) proposed in [10]. However, IoU only considers the intersection of two sets of concepts as relevant and overlooks cases where different concepts can also be extremely similar (e.g., “C0042449:veins” and “C0005847:blood vessels”). On the other hand, while BM takes into account the relatedness between CUIs, it fails to capture varying degrees of relevance, especially when CUI set $A$ is a subset of CUI set $B$. 
Considering the above limitations, we propose a novel measure called nearest neighbor-based Intersection over Union (nn-IoU): $$ \operatorname{nn\text{-}IoU}(A, B) = \frac{|A \cap B| + \lambda \cdot |rel(A, B)|}{|A \cup B|} $$ where $A \cup B$ and $A \cap B$ refer to the union and intersection of CUI sets $A$ and $B$. Besides, $rel(A, B)$ denotes the (directly) connected CUIs (also called neighbours) between CUI sets $A$ and $B$, which can be computed as shown in Algorithm 1. $\lambda$ is a coefficient between zero and one that balances the importance of identical overlapping CUIs against that of directly related CUIs.

# Algorithm 1: Identifying related concepts
Input: CUI sets $A$ and $B$; distance threshold $n$
Output: $rel(A, B)$
1. $rel(A, B) \gets \emptyset$
2. for $a \in A$ do
3.   for $b \in B$ do
4.     if $\mathrm{dist}(a, b) \leq n$ then  (select nearest neighbours)
5.       if $a \notin A \cap B$ and $a \notin rel(A, B)$ then add $a$ to $rel(A, B)$  (no CUI appears twice in the numerator)
6.       if $b \notin A \cap B$ and $b \notin rel(A, B)$ then add $b$ to $rel(A, B)$
7. return $rel(A, B)$

# Algorithm 2: Computing nn-CUI@K
Input: medical image-text pair dataset $\{V_i, T_i\}_{i=1}^N$; CBIR system $M$
Output: nn-CUI@K
1. Extract CUIs from $\{V_i, T_i\}_{i=1}^N$
2. $NDCG \gets 0$
3. for $i \gets 1$ to $N$ do
4.   $DCG \gets 0$, $IDCG \gets 0$
5.   Use $V_i$ as query image; get top-$K$ similar images $\{r_j\}_{j=1}^K$ using system $M$
6.   for $j \gets 1$ to $K$ do
7.     $relevance \gets \mathrm{nn\text{-}IoU}(CUI(r_j), CUI(V_i))$
8.     $penalty \gets \log_2(j + 1)$
9.     $DCG \gets DCG + relevance / penalty$
10.  Get top-$K$ similar images $\{R_j\}_{j=1}^K$ by ranking candidates with nn-IoU
11.  for $j \gets 1$ to $K$ do
12.    $relevance \gets \mathrm{nn\text{-}IoU}(CUI(R_j), CUI(V_i))$
13.    $penalty \gets \log_2(j + 1)$
14.    $IDCG \gets IDCG + relevance / penalty$
15.  $NDCG \gets NDCG + DCG / IDCG$
16. return $NDCG / N$

# 3.4 Evaluation based on ranking order Based on the relevance defined in Sec. 3.3, we finally compute our proposed semantically-aware measure nn-CUI@K through the following steps: 1. For each image in the test set, we extract CUIs from its descriptive caption; 2. With each image as a query, we first compute the relevance score using nn-IoU as defined in Sec. 3.3 between the set of CUIs paired with that image and the set of CUIs paired with each candidate image. We then rank all candidate images in descending order of nn-IoU score and treat this ranking as the ground truth for the image retrieval task; 3. We compute the sum of the discounted relevance scores for the top $K$ retrieved images, called the Discounted Cumulative Gain (DCG), and then compute the Ideal DCG (IDCG) score using the ground-truth ranking; 4. The final score is the average of the Normalized Discounted Cumulative Gain (NDCG) values over all query images. Specifically, given a test set of $N$ medical image-text pairs and a CBIR system $M$, the nn-CUI@K score over this dataset can be computed as detailed in Algorithm 2. # 4 Experimental study The only difference between the proposed CBIR evaluation metric nn-CUI@K and the CUI@K proposed in [20] is that we compute the relevance of medical images using nn-IoU instead of IoU. To showcase the interest of nn-IoU, we evaluate its performance in the context of CUI-based image retrieval in the following experiments. # 4.1 Datasets ROCOv2 is a multimodal dataset [19] consisting of 60,163/9,948/9,928 images in the training, validation, and test splits. 
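The measure above can be sketched in a few lines of Python. The toy graph below stands in for the UMLS KG (its edges are illustrative, apart from the brain stem-head path taken from the example in Sec. 3.2), and `nn_cui_at_k` is our name for the per-query NDCG computation of Algorithm 2:

```python
import math
from collections import deque

# Toy undirected graph standing in for the UMLS KG; edges are illustrative,
# except the C0006121-C0006104-C0926510-C0018670 path from Sec. 3.2.
KG = {
    "C0042449": {"C0005847"},              # veins -- blood vessels
    "C0005847": {"C0042449"},
    "C0003842": set(),                     # arteries (isolated here, assumed)
    "C0006121": {"C0006104"},              # brain stem
    "C0006104": {"C0006121", "C0926510"},
    "C0926510": {"C0006104", "C0018670"},
    "C0018670": {"C0926510"},              # head
}

def dist(x, y):
    """Shortest-path length between CUIs x and y (BFS); inf if disconnected."""
    if x == y:
        return 0
    seen, frontier, d = {x}, deque([x]), 0
    while frontier:
        d += 1
        for _ in range(len(frontier)):
            for nb in KG.get(frontier.popleft(), ()):
                if nb == y:
                    return d
                if nb not in seen:
                    seen.add(nb)
                    frontier.append(nb)
    return math.inf

def rel(A, B, n=1):
    """Algorithm 1: CUIs related across the two sets within distance n,
    excluding exact overlaps (no CUI is counted twice in the numerator)."""
    out = set()
    for a in A:
        for b in B:
            if dist(a, b) <= n:
                out |= {c for c in (a, b) if c not in A & B}
    return out

def nn_iou(A, B, n=1, lam=0.5):
    """nn-IoU(A, B) = (|A ∩ B| + λ·|rel(A, B)|) / |A ∪ B|."""
    return (len(A & B) + lam * len(rel(A, B, n))) / len(A | B)

def nn_cui_at_k(query, candidates, k, n=1, lam=0.5):
    """Algorithm 2 for a single query: NDCG@K with nn-IoU as graded relevance.
    `candidates` lists the candidate CUI sets in the CBIR system's ranking order."""
    def dcg(order):
        return sum(nn_iou(c, query, n, lam) / math.log2(j + 2)
                   for j, c in enumerate(order[:k]))
    ideal = sorted(candidates, key=lambda c: nn_iou(c, query, n, lam), reverse=True)
    return dcg(candidates) / dcg(ideal)
```

For instance, with $A$ = {C0042449, C0006121} and $B$ = {C0005847, C0006121}, plain IoU gives 1/3, whereas nn-IoU with $n = 1$ and $\lambda = 0.5$ gives $(1 + 0.5 \cdot 2)/3 = 2/3$, since veins and blood vessels are direct neighbours in the graph.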
All images are extracted from PubMed [5] articles with corresponding captions covering a wide range of modalities and organs. Besides, each image is paired with a set of CUIs extracted from its caption. # 4.2 Implementation details Only hierarchical relations (i.e., is_a relations) of the KG from the UMLS Metathesaurus are used to measure the distance between CUIs. The lengths of the shortest paths between all involved CUIs are precomputed offline using NetworkX [8]. The distance threshold $n$ is set to 1. The $\lambda$ value for the nn-IoU score is set to 0.5. We justify these hyperparameter settings in the following ablation study. # 4.3 Evaluation for relevance measure nn-IoU To quantitatively evaluate the effectiveness of the proposed relevance measure nn-IoU vs. IoU, we perform image retrieval on ROCOv2, since a well-designed relevance measure should help to identify the most relevant images for a given query. Specifically, we first perform image retrieval based only on the corresponding CUIs, using nn-IoU and IoU as the relevance measure. Additionally, we also perform image retrieval based solely on textual and visual content instead of discrete CUIs. To do so, we encode each image-text pair using the BioMedCLIP visual and textual encoders [26], respectively, and then retrieve the most similar images based on the cosine similarity of the encoded visual and textual embeddings. We then use Precision@K to evaluate the retrieval performance for each relevance measure. Categorical modality/organ labels are obtained by mapping each class to the corresponding CUIs. The experimental scores are presented in Table 1. We observe that nn-IoU outperforms IoU in all tasks; the relatively small performance gap may stem from the nature of the ROCO dataset (each image is paired with a short caption containing only 3.4 CUIs on average, so the potential relatedness may not be ubiquitous). 
We also observe that nn-IoU leads to an even better retrieval result compared to the retrieval performance of both the textual and visual encoders of BioMedCLIP (except the Precision@5 score for modality & organ), which confirms its superiority in measuring the relevance of medical images using CUIs. Fig. 2: Ablation study of including more neighbours of CUIs for approximate matching by increasing $n$ from 0 to 2, where $n$ denotes the CUI distance threshold. Yellow nodes correspond to the nearest neighbours of the red node. Table 1: Evaluation of image retrieval using various relevance measures on the ROCOv2 test set. BioMedCLIP-v/t refer to visual and textual embeddings encoded using BioMedCLIP. # 4.4 Ablation study As shown in Equation 2, the major difference between IoU and nn-IoU is the newly added term $rel(A, B)$ in the numerator, which accounts for the approximate matching by matching each CUI not only with itself but also with its nearest neighbours in the KG. To identify whether it is beneficial to add such a term, we aim to quantify the impact of $rel(A, B)$, which enables approximate matching. Therefore, in the following ablation study we freeze all the parameters of nn-IoU except the distance threshold $n$, which controls the number of nearest neighbours, and the coefficient $\lambda$, which balances the weight of $rel(A, B)$. To highlight the impact of the approximate matching, we perform image retrieval tasks on an isolated subset of ROCOv2 where IoU and nn-IoU exhibit inconsistent performance, as illustrated in Fig. 2. The P@30 experimental scores are shown in Fig. 3. As can be observed, when $n = 0$, no nearest neighbour is counted and nn-IoU is equivalent to IoU, which explains why there is no variation when increasing $\lambda$. The areas under the curves indicate the performance gap between IoU ($n = 0$) and nn-IoU ($n = 1$ and $n = 2$). 
We can also observe that the green and red curves are always above the blue one when $\lambda \in [0, 0.7]$, which highlights that it is beneficial to add $rel(A, B)$ paired with an appropriate weight. We also find that it can be detrimental to assign a higher value to $\lambda$, especially when $n = 2$, as some noise might be included with a relatively high weight, leading to inaccurate relevance. Fig. 3: P@30 scores in the ablation study. The retrieval precision keeps increasing until $\lambda = 0.5$ and $\lambda = 0.2$ when increasing $\lambda$ from 0 to 1 for $n = 1$ and $n = 2$, respectively. # 4.5 Time cost of evaluation Compared to IoU, our relevance measure nn-IoU inevitably comes with a higher computational cost, since it introduces an additional approximate matching for CUIs expressed by the term $rel(A, B)$. Since UMLS models the relations between all CUIs using a huge KG, it can be computationally expensive to find the potential nearest neighbours of CUIs by searching for the shortest path between two CUIs. Instead of computing the distances between individual CUIs on the fly, we propose to compute the necessary distances between CUIs offline and to store the nearest neighbours of each CUI in a dictionary. In this way we avoid repeated computation, and $rel(A, B)$ can be computed even without access to the KG. We provide the evaluation time cost on ROCOv2 in Fig. 4.
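A minimal sketch of this offline caching, assuming a hypothetical precomputed neighbour table (the entries below are illustrative, not real UMLS neighbourhoods, and `rel_cached` is our name for the lookup-only variant):

```python
# Hypothetical table produced offline: each CUI maps to its neighbours within
# the distance threshold n. Entries are illustrative, not real UMLS data.
NEIGHBOURS = {
    "C0042449": {"C0005847"},
    "C0005847": {"C0042449"},
    "C0006121": {"C0006104"},
    "C0006104": {"C0006121", "C0926510"},
}

def rel_cached(A, B, nb=NEIGHBOURS):
    """rel(A, B) via dictionary lookups only; the KG is not needed at eval time."""
    inter = A & B
    out = set()
    for a in A:
        for b in B:
            # a == b covers dist 0; b in nb[a] covers dist <= n from the table
            if a == b or b in nb.get(a, ()):
                out |= {c for c in (a, b) if c not in inter}
    return out
```

Since each query only performs set and dictionary lookups, the per-pair cost drops from a shortest-path search over the full KG to O(1).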
Performance evaluation for Content-Based Image Retrieval (CBIR) remains a crucial but unsolved problem today, especially in the medical domain. Various evaluation metrics have been discussed in the literature to solve this problem. Most of the existing metrics (e.g., precision, recall) are adapted from classification tasks, which require manual labels as ground truth. However, such labels are often expensive and unavailable in specific thematic domains. Furthermore, medical images are usually associated with (radiological) case reports or annotated with descriptive captions in literature figures, and such text contains information that can help to assess CBIR. Several researchers have argued that the medical concepts hidden in the text can serve as the basis for CBIR evaluation purposes. However, these works often consider the medical concepts as independent and isolated labels, neglecting the subtle relationships between concepts. In this work, we introduce the use of knowledge graphs to measure the distance between medical concepts and propose a novel relevance measure for the evaluation of CBIR by defining an approximate matching-based relevance score between two sets of medical concepts, which allows us to indirectly measure the similarity between medical images. We quantitatively demonstrate the effectiveness and feasibility of our relevance measure using a public dataset.
[ "cs.CV" ]
# 1 Introduction The rise of open-weight foundation models, such as CLIP [42, 22], T5 [43] and the more recent Gemma [56], Llama [16] and DeepSeek [9], has caused a paradigm shift in the field of machine learning. Instead of training a model from scratch as was previously the norm, it is now increasingly common for practitioners and researchers alike to start with a pre-trained foundation model and then fine-tune it on a task of interest [51]. This approach leverages the benefits of transfer learning, leading to performance and robustness gains. The proposal of multiple parameter-efficient fine-tuning (PEFT) methods [19, 30], which reduce the computational costs of fine-tuning and limit catastrophic forgetting by only updating a subset of the model parameters, further enables this approach. This has led to a proliferation of different versions of these foundation models and of PEFT adapters, fine-tuned on a variety of downstream tasks, which are openly accessible on public model repositories such as Hugging Face [58] and Adapter Hub [41]. Model upcycling, the practice of reusing existing models to create new, more capable deep learning systems [66, 17], capitalizes on this proliferation of fine-tuned models and adapters. Two upcycling strategies stand out: model merging and model MoErging. Model merging methods combine multiple fine-tuned versions of the same foundation model into one, preserving the size and therefore the computational and memory requirements of the original pre-trained model while infusing it with multiple new capabilities [32, 24, 21, 62, 65, 7]. The advent of model merging techniques and open-source libraries for merging [25, 15] has had an important impact on the deep learning community, providing a simple, training-free way to create better models from already existing models and adapters. 
In the past year, many of the top performing models on Hugging Face's Open LLM Leaderboard [3] have resulted from the merging of multiple fine-tuned checkpoints [65]. Model MoErging [61] similarly combines multiple adapted experts, but instead of fusing the parameters directly, MoErging approaches such as [37, 34] combine adapters into modular, mixture-of-experts (MoE) type layers [47], expanding the model's size and capabilities. A routing mechanism determines which input, or part of the input, gets processed by which expert modules. For this upcycling strategy, further training is often required to let the router and expert adapters learn how to interact with one another. A natural pipeline has therefore emerged to leverage the benefits of transfer learning and amortize past sunk training costs: large models are pre-trained in an unsupervised fashion on large amounts of general, unlabeled data; these foundation models are then fine-tuned, potentially using PEFT techniques, on specialized datasets or tasks; finally, these fine-tuned expert checkpoints or adapters are upcycled and combined to create more capable, often multi-task models. A common assumption is that increased performance at one stage of this pipeline will propagate downstream. In other words, a stronger pre-trained model should yield a stronger fine-tuned model, and similarly, stronger fine-tuned experts should produce a stronger merged / MoErged model. We challenge this assumption in this work by studying the following questions: How does expert training affect upcycling? and Do all capabilities and knowledge transfer equally well? We find that long fine-tuning that optimizes for expert performance can substantially hurt model upcycling, a phenomenon to which we refer as “overtraining” in the context of this paper. While overtrained experts might be better on their respective fine-tuning tasks, they lead to worse performance when merged or when used as initializations for model MoErging. 
We validate this phenomenon across diverse settings, including merging fully fine-tuned and PEFT models, performing MoErging with LoRA adapters, and in both vision and language domains. Additionally, we identify what type of knowledge gets preserved during model merging. We find that easy examples are correctly classified by merged models while harder data points are overwhelmingly forgotten during the merging process. While some recent work has hinted that undertraining experts can benefit merging performance [38, 68], our work provides a systematic analysis of this phenomenon and demonstrates how a simple early stopping strategy can significantly improve the efficacy of existing merging and MoErging techniques. Our research introduces a critical new dimension to model upcycling, showing how careful expert training and targeted checkpoint release can unlock improved performance. Concretely, our contributions are the following:
• We show that overtraining fully fine-tuned (FFT) models produces sub-optimal merges (Section 3.1), and that the negative impact is even stronger when using LoRA adapters for parameter-efficient fine-tuning (Section 3.2);
• We explain this phenomenon through the lens of data difficulty in Section 4, showing that later training steps are primarily guided by the loss of a small fraction of difficult examples, which are predominantly forgotten when merging;
• We show that for model MoErging, overtraining the constituent experts leads to lower final accuracy after further multi-task training of the modular model (Section 3.4);
• We show that a task-dependent training time of experts can bring a further boost in upcycling performance. We propose a simple early stopping strategy that favors expert undertraining. This strategy effectively adapts the training duration for each task and can recover optimal upcycling accuracy (Section 5). 
# 2 Preliminaries and methodology # 2.1 Model merging Model merging has recently gained a lot of popularity as a means to combine the abilities of multiple fine-tuned versions of the same pre-trained model into one, preserving the model architecture and size. Formally, a model merging method, $\mathrm{Merge}$, takes the parameters $\theta_0$ of the pre-trained foundation model and the parameters $\{\theta_t\}_{t \in \mathcal{T}}$ of the multiple experts, i.e., models fine-tuned on each task $t$ from a set $\mathcal{T}$, and outputs the parameters of the merged model $\bar{\theta} = \mathrm{Merge}(\theta_0, \{\theta_t\}_{t \in \mathcal{T}})$. A simple example of this combination step is averaging the different fine-tuned models' parameters: $$ \bar{\theta} = \frac{1}{|\mathcal{T}|} \sum_{t \in \mathcal{T}} \theta_t. $$ A common challenge in model merging is the observed performance degradation of the merged model $\bar{\theta}$ on individual tasks $t \in \mathcal{T}$, relative to the original fine-tuned model $\theta_t$. This phenomenon has been coined “interference”, and a plethora of merging methods have been proposed to reduce interference when merging models and to preserve as much of the accuracy of the expert models as possible [32, 24, 62, 65, 8, 7]. These methods have mainly focused on modifying the expert parameters $\{\theta_t\}_{t \in \mathcal{T}}$ or the respective task vectors $\{\tau_t\}_{t \in \mathcal{T}}$, where $\tau_t = \theta_t - \theta_0$, and/or changing the combination step. 
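Equation (1) and the task-vector formulation can be sketched as follows, with scalar parameters standing in for full weight tensors (the dict-of-floats representation and the $\alpha = 0.3$ default are purely illustrative; real implementations operate elementwise on tensors):

```python
def average_merge(experts):
    """Equation (1): average the parameters of the fine-tuned experts.
    Each expert is a {parameter_name: value} dict with identical keys."""
    return {k: sum(e[k] for e in experts) / len(experts) for k in experts[0]}

def task_vector(theta_t, theta_0):
    """tau_t = theta_t - theta_0: the update induced by fine-tuning on task t."""
    return {k: theta_t[k] - theta_0[k] for k in theta_0}

def task_arithmetic(theta_0, experts, alpha=0.3):
    """Add the alpha-scaled sum of task vectors back onto the pre-trained weights."""
    taus = [task_vector(e, theta_0) for e in experts]
    return {k: theta_0[k] + alpha * sum(t[k] for t in taus) for k in theta_0}
```

Methods that reduce interference mostly differ in how they transform the `taus` before this final combination (e.g., pruning or sign-aligning their entries).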
We consider four popular merging methods:
• Average simply averages the parameters of all fine-tuned models following Equation (1);
• Task Arithmetic (TA) [21] scales the sum of the task vectors by a scalar $\alpha_t$ and adds it to the pre-trained model parameters, returning $\theta_0 + \sum_{t \in \mathcal{T}} \alpha_t (\theta_t - \theta_0)$;
• TIES [62] prunes low-magnitude parameters of each task vector and then averages the remaining sparse task vectors based on sign alignment: in each parameter dimension, TIES only averages the parameters from each task vector that have the same sign as the weighted majority;
• DARE [65] randomly prunes a fraction of each task vector's parameters; the remaining sparse task vectors are then rescaled based on the pruning fraction and combined as in the TA method above.
# 2.2 Model MoErging Another popular class of upcycling strategies besides model merging are model MoErging techniques. MoErging methods aggregate multiple fine-tuned experts with the use of modular architectures to build stronger deep learning systems. The large design space of these methods, paired with their effectiveness, has led to the rapid development of many new methods in the recent past [61]. A key feature of MoErging approaches is modularity; multiple experts are considered simultaneously, and a routing mechanism decides which input, or which part of an input, is processed by which expert. In this work we consider per-token and per-layer routing, following recent works which suggest this leads to better performance relative to other possible configurations [37, 34]. Concretely, let $\mathbf{W} \in \mathbb{R}^{d_{\mathrm{out}} \times d_{\mathrm{in}}}$, $b \in \mathbb{R}^{d_{\mathrm{out}}}$ denote the weight matrix and bias of a pre-trained linear layer, whose original output is $\mathbf{W} x + b$. 
We assume the availability of a fine-tuned expert module $E _ { t } ( \cdot )$ for each target task $t \in \mathcal { T }$ and we replace the original linear layer with a MoE layer. A router $\pi$ parameterized by a matrix $R \in \mathbb { R } ^ { | \mathcal { T } | \times d _ { \mathrm { i n } } }$ computes routing logits $R x$ and applies a softmax $\sigma ( \cdot )$ to obtain the routing probabilities. The outputs of the experts with the top $k$ highest probabilities are then computed and weight-averaged. The resulting MoE layer output is: $$ y = \mathbf { W } x + b + \frac { \sum _ { t \in I _ { k } ( x ) } \pi ( x ) _ { t } E _ { t } ( x ) } { \sum _ { t \in I _ { k } ( x ) } \pi ( x ) _ { t } } , \tag{2} $$ where $I _ { k } ( x ) = \{ t \mid \pi ( x ) _ { t } \in \text { t o p - } k \text { e l e m e n t s o f } \pi ( x ) \}$. We use $k = 2$ for our experiments. We consider the "multi-task" setting, where we assume access to all the datasets the experts were trained on. After updating every linear layer of the pre-trained model with the available adapters, we continue training the MoE-fied model on the multi-task mixture of data, freezing the original model parameters and only updating the router and the expert modules.

# 2.3 Low-rank adaptation

Modern foundation models have tens, if not hundreds, of billions of parameters, making full fine-tuning impractical on typical hardware [16, 9, 56]. Parameter-Efficient Fine-Tuning (PEFT) updates only a small subset of the parameters to ease the computational burden and curb catastrophic forgetting [19, 30]. Low-Rank Adaptation (LoRA) [19] has emerged as one of the most popular PEFT methods due to its simplicity and effectiveness. LoRA inserts two low-rank matrices $\mathbf { A }$ and $\mathbf { B }$ into selected linear layers of a model.
If the input and output dimensions at that layer are $n _ { i n }$ and $n _ { o u t }$, LoRA uses a rank $r \ll \operatorname* { m i n } ( n _ { i n } , n _ { o u t } )$ to define matrices $\mathbf { A } \in \mathbb { R } ^ { r \times n _ { i n } }$ and $\mathbf { B } \in \mathbb { R } ^ { n _ { o u t } \times r }$. The output of that layer then becomes $( { \bf W } { \bf x } + { \bf b } ) + \frac { \alpha } { r } { \bf B } { \bf A } { \bf x }$, where $\alpha$ is a scaling hyperparameter. During fine-tuning, the original model parameters are frozen and only LoRA's $\mathbf { A } , \mathbf { B }$ matrices are updated.

Merging LoRA adapters At each layer, the weight update induced by LoRA is exactly $\Delta W = W _ { \mathrm { f i n e \text{-} t u n e d } } - W _ { \mathrm { p r e \text{-} t r a i n e d } } = \frac { \alpha } { r } \mathbf { B } \mathbf { A }$. Consequently, standard merging techniques can be directly applied to LoRA-adapted models if the updates $\frac { \alpha } { r } \mathbf { B } \mathbf { A }$ are added to the pre-trained weights or if they are directly used to compute the task vectors. Merging the LoRA $\mathbf { A }$ and $\mathbf { B }$ matrices separately is not recommended, since this can lead to mismatched representation spaces resulting in poor performance [52]. Nevertheless, recent work has observed that merging LoRA-adapted models is harder than merging FFT models [55, 52], often leading to significant performance degradation.

Model MoErging with LoRA adapters Using LoRA adapters for model MoErging is straightforward, with each adapter being used to define one expert module in the MoE layer.
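A minimal sketch of the LoRA update $\frac{\alpha}{r}\mathbf{B}\mathbf{A}$ and of task-arithmetic merging applied to the full-rank deltas; the shapes, scaling constant, and random adapters are illustrative assumptions, not the paper's settings:

```python
# LoRA delta and task-arithmetic merging on the full-rank updates.
# Merging A and B separately is avoided, per the caveat above.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, r, alpha = 6, 4, 2, 32
W0 = rng.standard_normal((n_out, n_in))  # frozen pre-trained weight

def lora_delta(A, B, alpha, r):
    """Delta W = (alpha / r) * B @ A, the exact update induced by LoRA."""
    return (alpha / r) * B @ A

# Two hypothetical task adapters (A_t: r x n_in, B_t: n_out x r).
adapters = [(rng.standard_normal((r, n_in)), rng.standard_normal((n_out, r)))
            for _ in range(2)]

# Task Arithmetic on the deltas: W_merged = W0 + alpha_t * sum_t Delta W_t.
alpha_t = 0.5
W_merged = W0 + alpha_t * sum(lora_delta(A, B, alpha, r) for A, B in adapters)
```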
Let $\mathbf { A } _ { t }$ and $\mathbf { B } _ { t }$ denote the LoRA low-rank matrices obtained from fine-tuning on task $t$; then we can define the expert modules in Equation (2) as $E _ { t } ( x ) = \frac { \alpha } { r } \mathbf { B } _ { t } \mathbf { A } _ { t } x$ for each task of interest $t \in \mathcal { T }$.

# 2.4 Data difficulty

Prior work has examined how individual data points influence neural network training dynamics and properties such as generalization, memorization, and privacy, leading to the development of various data difficulty scores [28]. These scores have been used for data pruning, i.e. removing certain examples from the training set without harming test performance [40]. In particular, large fractions of easy examples can be pruned since they contribute little to learning, while removing a small fraction of the hardest examples can improve generalization, as these are likely to be outliers with uncommon features [57] or examples with noisy or incorrect labels [40]. [48] further showed that appropriate data pruning can yield better-than-power-law error scaling with dataset size. A natural relationship exists between data difficulty and deep learning generalization and memorization. For instance, [48] found a 0.78 Spearman rank correlation between EL2N scores [40] and the memorization score presented by [13]. This indicates that, in order to classify difficult examples, models often need to memorize them. This relationship between memorization and generalization has been further substantiated with theoretical results in simpler settings [2, 12]. We utilize data difficulty scores to identify which knowledge is transferred during upcycling. Specifically, we use the EL2N score proposed by [40], which is the norm of the error vector, i.e. the predicted class probabilities minus the one-hot label encoding.
The EL2N score of a training example $x$ with one-hot encoded label $y$ is defined to be $\mathbb { E } \| p ( \theta , x ) - y \| _ { 2 }$, where $p ( \theta , x )$ are the predicted class probabilities of example $x$ under a deep learning model with parameters $\theta$.

# 2.5 Models and datasets

Vision domain We use a standard setting for testing the performance of merging methods in the vision domain [21]: a CLIP [42] pre-trained ViT-B-32 model [11] is fine-tuned on 8 image classification tasks: Cars [27], DTD [6], EuroSAT [18], GTSRB [50], MNIST [10], RESISC45 [4], SUN397 [60] and SVHN [35]. The fine-tuning is done with a batch size of 128, the AdamW optimizer [31, 39] and a learning rate of 1e-5. We use a learning-rate scheduler with linear warm-up for the first $10 \%$ of training, followed by cosine annealing. When evaluating merged models, we use the corresponding frozen classification head for each task.

Language domain For our natural language processing (NLP) experiments we adopt the setting from the TIES paper [62]. We use pre-trained T5-base models [43] which we fine-tune on 7 tasks: QASC [26], WikiQA [64] and QuaRTz [53] for question answering; PAWS [67] for paraphrase identification; Story Cloze [46] for sentence completion; and Winogrande [44] and WSC [29] for coreference resolution. We use the AdamW [31] optimizer with a batch size of 256, a constant learning rate of 1e-4 and no weight decay. bfloat16 mixed-precision training was used to reduce GPU memory usage.

Evaluation For all our experiments we report the raw, un-normalized test accuracy averaged across the considered tasks. The normalized accuracy is a very common metric used to compare model merging methods [21, 62]. However, because the normalized accuracy depends on both the merged model's performance and that of the experts, it is not suitable for settings like ours where different sets of experts are used and compared.
Our experiments are run using the PyTorch [39] and HuggingFace [58] open-source machine learning frameworks on an Nvidia Quadro RTX 8000 GPU with 48GB of memory.

Figure 1: Average test accuracy across all 8 vision classification tasks for fully fine-tuned (right) and LoRA-adapted (left) ViT-B-32 models. We plot the average accuracy of the expert models evaluated on their respective tasks as well as merging accuracies for multiple methods. Mean and standard deviation across 3 random seeds and LoRA initializations shown for the merging methods.

# 3 Longer fine-tuning hurts model upcycling

In this section, we present results challenging the common assumption that better fine-tuned models lead to better upcycling results. We show that overtrained experts lead to worse merged models for both FFT and LoRA, as well as to lower accuracy when used to initialize MoErging methods.

# 3.1 Merging fully fine-tuned models

While a multitude of model merging methods have been proposed, the influence of the fine-tuning procedure itself on merging remains understudied. Most prior works have used similar fine-tuning protocols, typically training for a fixed 2000 steps in the vision setting described in Section 2.5. Instead of proposing yet another model merging method, we examine how the number of training iterations affects merging. We fine-tune our vision and NLP models for a varying number of training steps $s \in \{ 2 , 4 , 8 , 1 6 , 3 2 , 6 4 , 1 2 8 , 2 5 6 , 5 1 2 , 1 0 2 4 , 2 0 4 8 \}$ on every considered dataset. Each merge combines either 8 vision experts or 7 NLP experts (one per task), all trained for the same duration.

Undertrained experts result in better merging Figure 1 (left) shows that, except for Average, all methods achieve better merging performance when the ViT experts are trained for just 256 training steps, only ${ \sim } 1 / 8$ of the commonly used 2000. TA, TIES, and DARE yield models with ${ \sim } 3 \%$ higher accuracy at 256 steps compared to 2048, a gain comparable to the $3 . 4 \%$ gap between TA and the more sophisticated TIES at 2048 steps. The same conclusions hold in the NLP setting (Figure 2, left), with both TA and TIES peaking around 256–512 training steps. Further training leads to a drop in merging performance of over $3 \%$ for both TA and TIES. Notably, merging undertrained experts with TA outperforms merging experts trained for longer with TIES. Average is the only method that seems to benefit from training the experts longer, but it consistently underperforms overall. Moreover, TA, TIES, and DARE show similar trends across training durations, suggesting that training length itself, rather than the merging method, plays a key role in merging performance.

Better experts do not necessarily lead to better merging The black lines in Figure 1 and Figure 2 show the average accuracy of the expert models on their respective fine-tuning tasks. In both the vision and NLP settings, we observe that higher expert accuracy does not necessarily translate into better merging performance. In the vision setting, expert models trained for 256 steps achieve an average accuracy of $88.4 \%$, which is $1.6 \%$ lower than at 2048 steps ($90.0 \%$). Nevertheless, merging after 256 steps yields models with approximately $3 \%$ higher accuracy than merging after 2048 steps. The discrepancy is even more pronounced in the NLP setting. Expert accuracy improves from $78.2 \%$ at 256 steps to $82.4 \%$ at 1024 steps, a $4 \%$ gain, yet the merging accuracy of TA and TIES drops by around $3 \%$ over the same interval.

# 3.2 Merging LoRA adapters

We now extend our previous results to the highly relevant setting of merging LoRA adapters. We find that long training of LoRA experts hurts merging performance even more than in the FFT case. We add LoRA adapters at every linear layer of the original ViT-B-32 and T5 models.
We use LoRA rank $r = 8$, scaling parameter $\alpha = 32$ and learning rates 1e-4 and 5e-4 for the ViT and T5 models respectively. We train the LoRAs for different numbers of steps $s$ to evaluate the impact of training duration on accuracy and mergeability. The parameters of the base model are kept frozen.

Figure 2: Average test accuracy across all 7 NLP tasks for fully fine-tuned (right) and LoRA-adapted (left) T5 models. We plot the average accuracy of the expert models evaluated on their respective tasks as well as merging accuracies for multiple methods. Mean and standard deviation across 2 random seeds and LoRA initializations shown for the merging methods.

Overtraining severely impairs LoRA merging The right panels of Figure 1 and Figure 2 show expert and merging accuracies for our vision and NLP LoRA models, respectively. For the ViT models, merging performance peaks at 128 training steps (64 for DARE), with accuracies ranging from 65–67% across all methods. Although further training improves expert accuracy by about $1 \%$, it significantly degrades merging performance, with accuracy drops of 5–6% for Average, TA, and DARE, and nearly $17 \%$ for TIES. In the NLP setting, different methods reach peak merging performance at different training durations: 512 steps for Average ($66.5 \%$), 256 for TA ($68.5 \%$), and 128 for TIES ($68.6 \%$). Expert models, however, continue to improve, reaching an average accuracy of $81.9 \%$ at 2048 steps. Despite this, merging at 2048 steps harms performance, with drops of $2.5 \%$, $4.6 \%$, and $9.9 \%$ for Average, TA, and TIES, respectively.

# 3.3 Effect of LoRA rank

We now examine how the LoRA rank affects the degradation effect observed in the previous section. We find that increasing the LoRA rank mitigates the loss in merging accuracy that occurs as experts are trained for longer.
We fine-tune ViT-B-32 models on the eight considered image-classification tasks, applying LoRA adapters to every weight matrix while systematically varying the adapter rank $r$. We employ square-root scaling for the LoRA factor $\alpha$, going from the values $r = 8$ and $\alpha = 32$ of the previous section to $( r , \alpha ) \in \{ ( 1 6 , 4 5 ) , ( 3 2 , 6 4 ) , ( 6 4 , 9 0 ) , ( 1 2 8 , 1 2 8 ) , ( 2 5 6 , 1 8 1 ) \}$. The models are trained for a varying number of steps $s \in \{ 8 , 3 2 , 1 2 8 , 5 1 2 , 2 0 4 8 \}$ to assess how training duration interacts with the LoRA rank in terms of merging accuracy. When merging, we combine LoRA-adapted models with the same rank and trained for the same number of steps. The resulting accuracies are plotted in Figure 3. Across all three merging methods (Average, TA, and TIES), increasing the LoRA adapter rank consistently raises merging accuracy at every training duration. Moreover, higher ranks substantially attenuate the accuracy drop associated with extended training: as the number of fine-tuning steps grows, models with larger ranks exhibit smaller declines from their peak merging performance.

# 3.4 Model MoErging with LoRA experts

We next analyze how the performance of MoE-fied models, initialized with LoRA experts, is affected by the training time of these experts. We use the LoRA adapters obtained in Section 3.2 with different numbers of training steps to initialize our MoE experts, one LoRA for each task. The routing mechanism is initialized using Arrow [37], where the weight vector associated with each expert is the first right-singular vector of the product $\mathbf { B } _ { t } \mathbf { A } _ { t }$. These vectors are assumed to determine the direction of most variance induced by expert $E _ { t }$ for $t \in \mathcal T$ in the space of hidden states induced by data from task $t$. We create one MoE-fied model for each number of steps $s$, i.e.
for each different model we initialize the MoE layers with the expert LoRAs for each task, all trained for $s$ steps. Once the MoE-fied model has been initialized using the fine-tuned LoRAs, we further train the routing mechanism and the LoRA experts in a multi-task fashion for 4000 steps with a peak learning rate of 1e-5, with the base parameters frozen. We report the final multi-task accuracies over the 8 classification tasks in Figure 4.

Figure 4: Final multi-task accuracy of the MoE-fied models as a function of the number of training steps used to obtain the LoRA experts used for initializing the MoE-fied model. Mean and standard deviation across 3 different initializations of the experts are shown.

Figure 3: Average test accuracy across all 8 vision classification tasks as a function of the number of fine-tuning steps for different LoRA ranks and three merging methods. Each panel shows one method: Average (left), Task Arithmetic (center) and TIES (right). Colored solid lines and distinct markers denote the different LoRA adapter ranks. The x-axis is in $\log _ { 2 }$ scale.

Figure 5: Left: Percentage of total loss for examples in different data difficulty bins. Bin 1 represents the $10 \%$ easiest examples (lowest EL2N scores), bin 10 the $10 \%$ hardest examples (highest EL2N scores). Mean across all 8 vision datasets shown. Right: Merging accuracy for experts trained without the hardest examples. Experts are trained on data with EL2N scores from percentile 0 to varying max percentiles in $\{ 9 0 , 9 5 , 9 8 , 9 9 , 1 0 0 \}$.

We observe that the MoE-fied models initialized with overtrained LoRA experts reach about $2 \%$ lower final multi-task accuracy than the models initialized with experts trained for less time. Even expert LoRAs trained for as little as 4 steps on their respective tasks reach a higher final multi-task accuracy than those overtrained. We conclude that overtraining experts can hurt downstream MoErging.
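A minimal sketch of the MoE layer of Equation (2) with top-$k$ routing and an Arrow-style router initialization (the router row for each expert is the first right-singular vector of its $\mathbf{B}_t\mathbf{A}_t$ product); dimensions, the number of experts, and the random adapters are illustrative assumptions:

```python
# MoE forward pass (Equation (2)) over LoRA experts with Arrow router init.
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, r, alpha, n_experts, k = 5, 4, 2, 32, 3, 2
W = rng.standard_normal((d_out, d_in))
b = np.zeros(d_out)
adapters = [(rng.standard_normal((r, d_in)), rng.standard_normal((d_out, r)))
            for _ in range(n_experts)]

# Arrow init: first right-singular vector of B_t A_t per expert.
R = np.stack([np.linalg.svd(B @ A)[2][0] for A, B in adapters])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_forward(x):
    pi = softmax(R @ x)                      # routing probabilities
    top = np.argsort(pi)[-k:]                # indices of the top-k experts
    num = sum(pi[t] * (alpha / r) * adapters[t][1] @ (adapters[t][0] @ x)
              for t in top)
    return W @ x + b + num / pi[top].sum()   # Equation (2)

y = moe_forward(rng.standard_normal(d_in))   # shape (d_out,)
```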
# 4 Why is undertraining beneficial for merging?

Easy examples are learned early during training while harder examples are learned later. To link our main observation to the training duration of the expert models, we track the loss of the training examples during training; these results are shown on the left of Figure 5. We group the training examples into 10 bins according to their data difficulty scores: the $10 \%$ of the examples with the lowest EL2N scores are in bin 1, and so on. EL2N scores are computed early in fine-tuning, after only 32 steps, across 10 different seeds. We observe that easy examples, which have more common features, are learned early in training. The rest of training is dedicated to learning the more difficult examples. In fact, the top $10 \%$ hardest examples account for over $50 \%$ of the total loss during most of training. As discussed in Section 2.4, these results imply that in later training steps models try to memorize difficult examples with uncommon features or noisy labels.

Model merging leads to the forgetting of difficult examples. To analyze why merging benefits from less training of the expert models, we examine which examples are forgotten during merging, i.e. which examples from the training set are correctly classified by the expert models but incorrectly classified once these models are merged. We hypothesize that merging primarily affects the classification of difficult examples. Memorizing such examples, with uncommon features or noisy labels, is likely to yield parameter updates which are unique from one dataset to another, and which will be destroyed by the aggregation step of model merging. Figure 6 shows pie charts of the examples which are forgotten during merging, with each "slice" representing one of ten data difficulty bins. Hard examples are overwhelmingly forgotten when merging, with over $50 \%$ of forgotten data points being in the top $30 \%$ in terms of data difficulty.
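The score computation and decile binning used in this analysis can be sketched as follows; the predicted probabilities and labels are synthetic stand-ins, not actual model outputs:

```python
# EL2N scores (norm of predicted probabilities minus one-hot labels) split
# into 10 percentile bins: bin 1 = easiest 10%, bin 10 = hardest 10%.
import numpy as np

rng = np.random.default_rng(0)
n, c = 1000, 10
probs = rng.dirichlet(np.ones(c), size=n)      # stand-in class probabilities
labels = rng.integers(0, c, size=n)            # stand-in labels

onehot = np.eye(c)[labels]
el2n = np.linalg.norm(probs - onehot, axis=1)  # one EL2N score per example

edges = np.percentile(el2n, np.arange(0, 101, 10))
bins = np.clip(np.searchsorted(edges, el2n, side="right") - 1, 0, 9) + 1
# np.bincount(bins)[1:] -> roughly 100 examples per bin
```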
Figure 6: Proportion of forgotten examples in each data difficulty bin for three different model merging methods. Bin 1 represents the $10 \%$ easiest examples (lowest EL2N scores), bin 10 the $10 \%$ hardest examples (highest EL2N scores). We see that hard examples are overwhelmingly forgotten when merging with all methods, with the $30 \%$ hardest examples representing over $50 \%$ of forgotten examples.

From these two observations, we conclude that fine-tuning for longer, which mainly helps the experts memorize difficult examples, is not beneficial to merging, since those harder examples will most likely be forgotten during the merging procedure.

Difficult examples are still necessary for good generalization. We remove difficult examples from expert training to see how this affects merging performance. Past work has determined that removing a small percentage of the most difficult examples can help generalization [57, 40]. We remove the top 1, 2, 5 or $10 \%$ most difficult examples from training to measure the impact on downstream merging; the results are shown in Figure 5 (right). We see that the best merging results are achieved when all of the available data is used for training. Removing a fraction of the most difficult examples consistently yields lower merging performance, with more data removed leading to greater performance loss. This suggests that some amount of memorization of hard examples and uncommon features during fine-tuning is beneficial for merging.

# 5 Aggressive early stopping during fine-tuning improves upcycling results

We next examine the variability of optimal expert training time among different tasks. We find that upcycling can be further improved if the stopping time is optimized for each specific task, and propose a strategy for when to stop training. The learning rate scheduler we use in Section 3, i.e.
linear warm-up followed by cosine decay, is a popular choice in the literature for training vision models, and has been extensively used in recent model merging papers. Both the warm-up and the decay phases are beneficial for performance: the former provides stability at the start of training, while the latter helps convergence with smaller steps at the end of training. Therefore, our early stopping strategy uses a learning rate scheduler with warm-up and decay phases which can adapt to the varying training length induced by early stopping. Altogether, our proposed early stopping strategy pairs a simple learning rate scheduler with an early stopping condition: a linear warm-up phase of a fixed number of steps, followed by a "reduce learning rate on plateau" phase which gradually decreases the learning rate when a plateau is observed in the validation accuracy. Once the learning rate drops below a certain threshold, training is stopped. We fine-tune FFT and LoRA models on the 8 considered vision tasks. We use a fixed number of 50 steps for the linear warm-up; we then evaluate accuracy on a validation set every 5 training steps and multiply the learning rate by a factor of 0.5 when the validation accuracy has not improved for 3 consecutive validation rounds. The peak learning rates are 1e-5 and 1e-4 for the FFT and LoRA models respectively. In Table 1, we report the merged model's average accuracy across the eight tasks. We compare the merging of early-stopped experts to two baselines from Section 3: merging "overtrained" models (trained for 2048 steps) and merging the checkpoints that achieved the highest accuracy among all training durations.

Table 1: Merging accuracy $( \% )$ for the overtrained, optimal and early stopped experts. Mean and standard deviation across 3 random seeds shown.
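The stopping rule described above can be sketched as a standalone helper; the function name, the synthetic validation accuracies, and the hyperparameter defaults are illustrative, not the exact training-loop implementation:

```python
# Reduce-on-plateau early stopping: after a fixed warm-up, halve the learning
# rate whenever validation accuracy stalls for `patience` rounds, and stop
# once the learning rate falls below `min_lr`.
def early_stopping_schedule(val_accs, peak_lr=1e-4, warmup=50, eval_every=5,
                            patience=3, factor=0.5, min_lr=1e-6):
    lr, best, stale, stop_step = peak_lr, -1.0, 0, None
    for i, acc in enumerate(val_accs):      # one entry per validation round
        step = warmup + (i + 1) * eval_every
        if acc > best:
            best, stale = acc, 0
        else:
            stale += 1
            if stale >= patience:
                lr *= factor
                stale = 0
        if lr < min_lr:
            stop_step = step
            break
    return stop_step, lr

# A completely flat validation curve: the learning rate is halved every
# `patience` rounds until it crosses `min_lr`.
stop, lr = early_stopping_schedule([0.8] * 100, peak_lr=1e-4, min_lr=1e-5)
```

With these illustrative settings the flat curve triggers four halvings (1e-4 down to 6.25e-6) and training stops at step 115.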
Table 2: Early stopping MoErging results.

We see that the models trained using our simple task-dependent early stopping strategy yield merges that are better than those of overtrained models, and as good as, if not better than, the best merged experts obtained from a single stopping time, as presented in Sections 3.1 and 3.2. Early stopping works especially well for LoRA adaptation, yielding results on average better than the best ones from Section 3.2. We also use the early-stopped LoRAs to initialize MoE layers, and then continue training in a multi-task fashion, as described in Section 3.4. The results presented in Table 2 show that the MoErged models initialized with the early-stopped LoRAs achieve the same accuracy as the best LoRAs across all the different numbers of steps tried.

# 6 Related work

Model merging Combining multiple versions of a model into a single, more capable one has been a powerful technique in deep learning, and a very active research area [63]. We review some of the popular methods in Appendix A. These merging methods often rely on so-called linear mode connectivity [14, 45], i.e., minima in a deep learning model's loss landscape that are connected by a low-loss linear path in parameter space. Models that share a significant part of their training trajectories were found to be linearly mode connected [14, 36]. Therefore, it is generally assumed that different fine-tuned versions of the same pre-trained model are linearly mode connected and can be merged successfully. [45] goes beyond that, and explores merging of experts that were trained from different or poorly performing pre-trained models. However, little attention has been paid to how the expert fine-tuning procedure itself, specifically its duration, affects model merging performance.
Model MoErging Model MoErging methods propose to re-use expert modules by combining them into mixture-of-experts (MoE) layers [47], with a router deciding which input, or part of an input, is processed by which expert module. Numerous model MoErging techniques have been proposed, with varying expert, router and application design choices [61, 20, 37, 34]. While existing surveys and methods focus on routing algorithms and module selection, to our knowledge none examine how expert overtraining influences downstream MoErging efficacy. Our setup for Section 3.4 is comparable to the one considered in [37], where LoRA experts are combined into MoE layers and the router is initialized using the Arrow method, but we assume access to training data and continue training the experts and router.

Expert training time While most model merging and MoErging papers do not discuss how the experts' fine-tuning procedure might affect downstream upcycling performance, there are two notable exceptions. First, [68] show that the effectiveness of task-vector-based approaches is largely driven by the gradients of the first epoch, and therefore propose to alternate successive one-epoch fine-tuning and merging steps. While they also seem to observe that less training can lead to better accuracy, they only test this for one epoch. We point out that, given the disparity in dataset sizes, one epoch of training on a large dataset might already bring an expert to an "overtrained" state, while one epoch of training on a small dataset yields an undertrained model. Secondly, [38] observe representational incompatibilities when merging highly specialized experts but evaluate only two-model merges, and their proposed solution is to bypass merging altogether and keep the experts intact through MoErging.
To our knowledge, we are the first to systematically link expert training duration to both downstream merging and MoErging outcomes, to analyze merging through the lens of example difficulty, and to propose an early-stopping strategy that adapts to dataset heterogeneity. We note that the TIES merging paper [62] uses early stopping for their NLP experiments to avoid expert overfitting; however, the impact on merging is not studied. Analogously to our work, past papers have looked at how scaling pre-training affects downstream fine-tuning performance. A large-scale study conducted on vision models [1] found that as pre-training accuracy improves, downstream fine-tuning performance saturates. More recently, [49] show that over-training LLMs during pre-training can harm performance after fine-tuning, both on in-distribution and out-of-distribution tasks.
Modern deep learning is increasingly characterized by the use of open-weight foundation models that can be fine-tuned on specialized datasets. This has led to a proliferation of expert models and adapters, often shared via platforms like HuggingFace and AdapterHub. To leverage these resources, numerous model upcycling methods have emerged, enabling the reuse of fine-tuned models in multi-task systems. A natural pipeline has thus formed to harness the benefits of transfer learning and amortize sunk training costs: models are pre-trained on general data, fine-tuned on specific tasks, and then upcycled into more general-purpose systems. A prevailing assumption is that improvements at one stage of this pipeline propagate downstream, leading to gains at subsequent steps. In this work, we challenge that assumption by examining how expert fine-tuning affects model upcycling. We show that long fine-tuning of experts that optimizes for their individual performance leads to degraded merging performance, both for fully fine-tuned and LoRA-adapted models, and to worse downstream results when LoRA adapters are upcycled into MoE layers. We trace this degradation to the memorization of a small set of difficult examples that dominate late fine-tuning steps and are subsequently forgotten during merging. Finally, we demonstrate that a task-dependent aggressive early stopping strategy can significantly improve upcycling performance.
# 1 Introduction

Solving partial differential equations (PDEs) underpins a vast array of phenomena in engineering and the physical sciences, from fluid flow and heat transfer to fracture mechanics and structural deformation. Traditional numerical methods offer rigorous error bounds and adaptable frameworks, but they often incur substantial computational costs when applied to high-dimensional, nonlinear, or time-dependent problems [1]. This computational burden can become prohibitive in real-time control and optimization tasks, motivating the search for surrogate models that deliver rapid yet accurate PDE solutions. In recent years, deep neural network–based surrogates have emerged as a powerful alternative, demonstrating orders-of-magnitude speedups over classical solvers while maintaining competitive accuracy [2, 3]. These data-driven models can learn solution operators from precomputed simulation data, enabling instantaneous inference once trained. Physics-informed neural networks (PINNs) [4] introduced a paradigm shift by embedding the governing PDE residual directly into the loss function, thus bypassing the need for labeled solution data. While PINNs have been successfully applied to a wide range of forward and inverse problems, each new setting of initial conditions, boundary values, or forcing terms requires retraining from scratch, constraining their applicability to a single PDE configuration [5, 6]. Neural operators extend the concept of surrogate modeling by directly mapping between infinite-dimensional input and output spaces, effectively learning solution operators for a family of PDEs. Foundational architectures such as DeepONet [7] and the Fourier Neural Operator (FNO) [8] show that a single model can generalize across varying PDE conditions and enable zero-shot super-resolution.
Inspired by the success of the transformer architecture [9] in natural language processing and computer vision, recent works have explored attention-based surrogate models to simulate physical systems. Typically, these models are trained on function samples defined over fixed discretization grids, which limits their ability to generalize across varying meshes [10, 11]. To address this, a new class of transformer-based neural operators has emerged that enables super-resolution and discretization-invariant querying of the output function [12–14]. These models employ cross-attention to aggregate input features and predict outputs at arbitrary spatial/temporal coordinates, regardless of the underlying input grid. Despite these early successes, significant challenges remain in scaling transformer-based operators to realistic engineering applications. In particular, modeling systems with irregular geometries and non-uniform meshes demands more powerful mechanisms to capture complex interactions and dynamics among spatial nodes. To address these challenges, we propose a novel graph-informed transformer operator (GITO) architecture tailored for mesh-agnostic operator learning on arbitrary domains (Figure 1). Our framework comprises two core modules: a hybrid graph transformer (HGT) and a transformer neural operator (TNO). HGT marries graph neural networks (GNNs), which model intricate local spatial relations, with transformer layers that capture long-range global dependencies, interleaving message passing and self-attention via a dedicated fusion layer to produce expressive relational embeddings. Building on these embeddings, TNO applies cross-attention for discretization-invariant querying of the output domain, followed by self-attention to capture dependencies among the enriched query embeddings.
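A minimal sketch of the cross-attention querying used by such operators: queries are embeddings of arbitrary output coordinates, while keys and values come from the input-function embeddings, so the output grid is decoupled from the input discretization. The projection matrices and dimensions are illustrative assumptions:

```python
# Single-head cross-attention: one output row per query point, for any
# number of input mesh points (discretization-invariant querying).
import numpy as np

rng = np.random.default_rng(3)
d = 8
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def cross_attention(query_emb, input_emb):
    Q, K, V = query_emb @ Wq.T, input_emb @ Wk.T, input_emb @ Wv.T
    logits = Q @ K.T / np.sqrt(d)
    attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)   # softmax over input points
    return attn @ V

inputs = rng.standard_normal((37, d))   # 37 mesh-cell embeddings
queries = rng.standard_normal((5, d))   # 5 arbitrary query-point embeddings
out = cross_attention(queries, inputs)  # shape (5, d), independent of 37
```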
Our main contributions are: 1) a novel graph-transformer-based neural operator architecture that seamlessly integrates local and global feature learning on irregular meshes and geometries, and 2) superior performance on benchmark PDE tasks, outperforming existing transformer-based neural operators.

# 2 Related work

Transformers as neural operators. The attention mechanism has shown promise at modeling both spatial correlations and temporal dynamics in physical systems. Spatial attention layers aggregate information across nonlocal points, capturing structural patterns and long-range dependencies within the domain [12, 13, 15, 16]. In the temporal setting, transformers learn state evolution over time without relying on recurrent architectures, often delegating spatial aggregation to other mechanisms such as GNNs [11, 14, 17]. In addition, recent work has focused on developing novel transformer architectures to improve the scalability and effectiveness of modeling complex physical systems [18–20]. Our method captures the spatial structures via linear-complexity attention mechanisms by leveraging the proposed HGT and TNO modules. Graphs as neural PDE solvers. GNNs have been explored as mesh-agnostic PDE solvers by representing spatial discretizations as graph vertices and leveraging message-passing to model local interactions [21, 22]. Previous studies have demonstrated that GNNs can effectively model diverse physical phenomena ranging from fluid dynamics and deformable materials [23] to global-scale weather forecasting [24]. Recently, transformer-inspired architectures have been applied to graph-based operator learning to more effectively handle arbitrary geometries and boundary conditions [16]. In parallel, latent-space compression via graph encodings has enabled efficient dynamics propagation and scalable temporal rollouts [11, 14].
# 3 Methodology

# 3.1 Graph construction and feature encoding

We represent both the input function and query points as separate graphs $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where each node $i \in \mathcal{V}$ corresponds to a spatial location (e.g., a mesh cell or a query point) and each edge $(i, j) \in \mathcal{E}$ connects node $i$ either to its $k$ nearest neighbors or to nodes within a specified Euclidean radius. The value of $k$ and the radius are treated as model hyperparameters (Section 4.3.2). Each node feature vector $\mathbf{V}_i$ includes the spatial coordinates $\mathbf{x}_i$. For nodes corresponding to the input function, the observed field value $\mathbf{u}_i$ is concatenated to the node features. Edge features $\mathbf{E}_{ij}$ comprise relative displacements $(\mathbf{x}_i - \mathbf{x}_j)$, Euclidean distances $\|\mathbf{x}_i - \mathbf{x}_j\|$, and, in the case of input function graphs, differences in solution values between connected nodes $\mathbf{u}_i - \mathbf{u}_j$ [21]. Both node and edge features are passed through dedicated MLP-based encoders to generate initial embeddings, which are then fed into the HGT layers for subsequent representation learning.

Figure 1: Overall architecture of GITO. The input function and query points are first converted into graph representations and encoded via edge and node encoders. These encoded graphs are then processed by the hybrid graph transformer (HGT) module to learn informative relational features. The output representations from the HGT are used as key/value and query inputs to the transformer neural operator (TNO) module, which integrates contextual information from input function observations to enrich the query representations. Finally, an MLP decoder maps the query embeddings to the predicted physical output values.
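The graph construction of Section 3.1 can be sketched in a few lines of plain Python. This is a hypothetical helper for illustration, not the authors' code: `knn_graph` connects each node to its $k$ nearest neighbors and attaches the edge features $\mathbf{x}_i - \mathbf{x}_j$, $\|\mathbf{x}_i - \mathbf{x}_j\|$, and (for input-function graphs) $\mathbf{u}_i - \mathbf{u}_j$.

```python
import math

def knn_graph(coords, u=None, k=2):
    """Hypothetical sketch of the Section 3.1 graph construction (not the
    authors' code): connect each node to its k nearest neighbors and attach
    the edge features x_i - x_j, ||x_i - x_j||, and optionally u_i - u_j."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    edges, edge_feats = [], []
    for i in range(len(coords)):
        # the k spatially closest other nodes become neighbors of i
        nbrs = sorted((j for j in range(len(coords)) if j != i),
                      key=lambda j: dist(coords[i], coords[j]))[:k]
        for j in nbrs:
            feat = [p - q for p, q in zip(coords[i], coords[j])]  # x_i - x_j
            feat.append(dist(coords[i], coords[j]))               # ||x_i - x_j||
            if u is not None:                # only for input-function graphs
                feat.append(u[i] - u[j])                          # u_i - u_j
            edges.append((i, j))
            edge_feats.append(feat)
    return edges, edge_feats

# four mesh points on a unit square with a scalar field sampled on them
coords = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
edges, feats = knn_graph(coords, u=[0.0, 1.0, 2.0, 3.0], k=2)
```

In the full model these per-edge vectors would pass through the MLP edge encoder before reaching the HGT layers; the radius-based variant simply replaces the k-nearest selection with a distance threshold.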
Figure 2: (Top) The hybrid graph transformer (HGT) module consists of a GNN layer, a self-attention global layer, and a self-attention fusion layer that jointly learn graph-based representations. (Bottom) The transformer neural operator (TNO) module employs cross-attention and self-attention mechanisms to integrate and process representations of input functions and query points. For clarity, standard components such as layer normalization, residual connections, and feed-forward networks are omitted.

# 3.2 Hybrid graph transformer (HGT) module

Despite their strengths, GNNs suffer from fundamental limitations due to sparse message passing, notably over-smoothing [25] and over-squashing [26]. Graph transformers (GTs) [27–29] address these shortcomings by allowing nodes to attend to all others in the graph; however, they often overlook edge features, hindering accurate representation learning. Hybrid architectures such as GPS Graph [30] and Exphormer [31] combine GNN and transformer layers to overcome these challenges: the GNN component captures local interactions and integrates edge information, while the transformer module models long-range and global dependencies and mitigates over-smoothing and over-squashing. Following this paradigm, we employ a GNN layer (GNN) alongside a linear self-attention module (GlobalAttn) to learn graph dynamics and introduce a fusion layer (Fusion) that applies self-attention to interleave local neighborhood aggregation with global attention, resulting in richer and more expressive graph representations (Figure 2).
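The local/global/fusion wiring described above can be sketched in plain Python. This is a toy illustration, not the paper's implementation: the GNN is a stand-in mean aggregator rather than GATv2, the global layer is a Galerkin-style linear attention with Q = K = V and layer normalization omitted, and the fusion step reuses the same attention over the concatenated features.

```python
def linear_attention(X):
    """Galerkin-style linear attention sketch with Q = K = V = X: compute
    (K^T V)/n once (a d x d matrix), then multiply by Q -- cost O(n d^2)
    instead of O(n^2 d). Softmax and learned projections are omitted."""
    n, d = len(X), len(X[0])
    KtV = [[sum(X[i][a] * X[i][b] for i in range(n)) / n
            for b in range(d)] for a in range(d)]
    return [[sum(X[i][a] * KtV[a][b] for a in range(d))
             for b in range(d)] for i in range(n)]

def gnn_layer(X, edges):
    """Toy message passing standing in for GATv2: each node averages the
    features of its in-neighbors (keeps its own if it has none)."""
    out = []
    for i in range(len(X)):
        nbrs = [j for (j, tgt) in edges if tgt == i]
        src = nbrs if nbrs else [i]
        out.append([sum(X[j][a] for j in src) / len(src)
                    for a in range(len(X[0]))])
    return out

def hgt_layer(X, edges):
    """V_G = GNN(V, E); V_T = GlobalAttn(V); V_hat = Fusion(V_G concat V_T)."""
    V_G = gnn_layer(X, edges)
    V_T = linear_attention(X)
    fused = [vg + vt for vg, vt in zip(V_G, V_T)]  # feature concatenation
    return linear_attention(fused)                 # fusion via self-attention

V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
E = [(0, 1), (1, 2), (2, 0)]
V_hat = hgt_layer(V, E)
```

Because concatenation doubles the feature width before fusion, each node's output embedding here is twice the input dimensionality, mirroring why the no-fusion baseline in Section 4.3.1 needs a doubled hidden size.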
In the HGT module, node representations are updated by concatenating the outputs of the GNN and GlobalAttn layers, followed by processing the combined embedding through the Fusion layer:

$$
\begin{aligned}
V_G, E &= \mathtt{GNN}(V, E) \\
V_T &= \mathtt{GlobalAttn}(V) \\
\hat{V} &= \mathtt{Fusion}\left( V_G \oplus V_T \right).
\end{aligned}
$$

The modularity of the hybrid graph transformer enables seamless integration of diverse GNN architectures and transformer modules, allowing the model to be tailored to specific application requirements.

# 3.3 Transformer neural operator (TNO) module

To empower zero-shot super-resolution and fully decouple input and output representations, we integrate a cross-attention layer capable of querying the output domain at arbitrary spatial locations (Figure 2). This design parallels the branch and trunk networks in the DeepONet [22], seamlessly fusing input function embeddings with output queries to achieve discretization-invariant evaluation, regardless of the underlying input mesh [12]. The cross-attention layer takes as input the query embeddings and the input function representations generated by the HGT modules. The cross-attention enriches the query embeddings with the information from the input functions. A subsequent self-attention module then captures interactions and dependencies among the enriched query points. Finally, an MLP decoder translates the resulting embeddings into the target physical output values.

# 3.4 Model implementation details

To efficiently learn operators for large-scale physical systems with numerous input and query locations, we adopt the linear-complexity attention mechanism proposed by Hao et al. [13].
Similar to Fourier and Galerkin attention mechanisms [10], this approach can capture complex dynamics while avoiding the quadratic computational cost of softmax-based attention. We adopt a “Norm-Attn-MLP-Norm” structure with residual connections for all attention layers. To handle cases with multiple input functions, we use a dedicated encoder for each input function. These encoded representations are then processed by the cross-attention module in TNO, which is specifically designed to handle multiple key-value (K/V) combinations, enabling efficient interaction across heterogeneous inputs. We incorporate a mixture-of-experts module following each attention mechanism. The gating network assigns weights to the experts based on the spatial location of the query points, effectively promoting a form of soft domain decomposition, which has been shown to enhance the learning of physical systems in prior work [13, 32]. In the HGT module, we use the Graph Attention Network (GATv2) [33] as the GNN layer and apply the same linear-complexity attention mechanism as in TNO for both the global and fusion layers. The graph construction strategies are detailed in Section 4.3.2. In this work, we apply the HGT module only to query points to learn more expressive relational features.

# 4 Experimental results

# 4.1 Datasets and baseline models

Datasets. To evaluate GITO’s scalability on complex geometries, we test it on three challenging datasets: Navier-Stokes [13], Heat Conduction [13], and Airfoil [34]. A brief overview of the datasets is provided below, with detailed descriptions available in Appendix A:

• 2D steady-state Navier-Stokes (NS): This dataset involves steady 2D fluid flow governed by the Navier-Stokes equations in a rectangular domain with varying cavity positions (Figure 5.a). The goal is to predict velocity components $u, v$, and pressure $p$ from the input mesh.
• Multilayer 2D Heat Conduction (Heat): This dataset models heat conduction in composite media with multiple boundary shapes and spatially varying boundary conditions (Figure 5.b). The task is to predict temperature $T$ from multiple input functions.

• Airfoil: This dataset involves the Mach number $M$ distribution over different airfoil shapes (Figure 6). The task is to predict $M$ from the input mesh and the geometry of the airfoil.

Baseline Models. We benchmark our model against both conventional neural operator architectures, including FNO [8], Geo-FNO [34], and MIONet [35], as well as recently developed transformer-based operators, namely, GNOT [13], Galerkin Transformer (GKT) [10], and OFormer [12]. To ensure a fair comparison, we re-implement GNOT and evaluate it under the same experimental settings as our model, using a comparable or slightly larger number of parameters (Appendix A). For the NS and Heat datasets, we directly report the baseline performances from Hao et al. [13], while for the Airfoil dataset, we refer to Wu et al. [15].

# 4.2 Results

Table 1 summarizes the mean relative $L^2$ error across all test datasets for the compared models, with lower values indicating higher prediction accuracy. The relative $L^2$ error is defined as $\frac{\|\hat{y} - y\|_2}{\|y\|_2}$, where $\hat{y}$ is the model prediction and $y$ is the ground truth. This metric provides a normalized measure of prediction accuracy that is consistent across datasets with varying magnitudes. Detailed configurations and hyperparameter settings are provided in Appendix A. GITO consistently achieves the lowest error across all datasets and variables, outperforming existing neural operator baselines. This includes both conventional architectures such as FNO [8], Geo-FNO [34], and MIONet [35], as well as transformer-based models like GNOT [13], GKT [10], and OFormer [12].
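The relative $L^2$ metric used throughout the results is straightforward to compute; a minimal sketch, with the predicted and ground-truth fields flattened into vectors:

```python
import math

def relative_l2(pred, true):
    """Relative L2 error ||y_hat - y||_2 / ||y||_2 over flattened fields."""
    num = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)))
    den = math.sqrt(sum(t ** 2 for t in true))
    return num / den

# a prediction off by 10% in every component has 10% relative error
err = relative_l2([1.1, 2.2, 3.3], [1.0, 2.0, 3.0])
```

Dividing by $\|y\|_2$ is what makes errors comparable across fields of very different magnitudes, e.g., pressure versus velocity on the NS dataset.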
To ensure a fair comparison, we re-trained GNOT using a reduced model size that matches GITO’s parameter count. Although FNO slightly outperforms GNOT for the $p$ variable on the NS dataset, GITO surpasses both, demonstrating superior generalization capabilities. For the Airfoil dataset, while GKT achieves the best baseline performance, our proposed GITO model delivers more than $46\%$ improvement in prediction accuracy over GKT, highlighting its effectiveness in modeling complex geometries.

Table 1: Comparison of GITO with existing operator learning methods on the NS, Heat, and Airfoil datasets. The metric used for this comparison is relative $L^2$ error, with lower scores indicating better performance. The first- and second-best results are highlighted. For the NS and Heat datasets, baseline results (except GNOT) are taken directly from Hao et al. [13], while for the Airfoil dataset, they are taken from Wu et al. [15]. For a fair comparison, we trained a smaller GNOT model to match GITO’s model size (see Appendix A for details).

Figures 3 and 4 further illustrate GITO’s generalization capability. In particular, Figure 3 presents a qualitative comparison of velocity components $(u, v)$ and pressure $(p)$ predicted by GITO against the ground truth for a sample from the NS dataset. The corresponding absolute error plots reveal the spatial distribution of prediction inaccuracies. Likewise, Figure 4 demonstrates GITO’s predictions of the Mach number field for a representative sample from the Airfoil dataset. The predicted field closely matches the ground truth, and the absolute error plot indicates minimal residual error.
These findings demonstrate GITO’s generality and efficacy in handling both complex geometries (NS and Airfoil datasets) and multi-input settings (Heat dataset), establishing it as a versatile, high-performance surrogate for diverse scientific and engineering applications.

# 4.3 Ablation studies

Beyond overall performance, we conducted extensive ablation studies to assess the effect of key architectural components and design choices.

# 4.3.1 Impact of fusion layer

To demonstrate the effect of the fusion layer on model accuracy, we conducted experiments on the NS dataset using identical hyperparameters, except for the hidden size. In this experiment, instead of concatenating the outputs of the GNN and self-attention modules, we summed them and passed the result through an MLP (similar to GPS Graph [30]). Accordingly, the model without the fusion layer was trained with a hidden size of 192 (twice that of GITO) to match the dimensionality of the fused outputs. The number of model parameters and the relative $L^2$ error on the NS dataset are reported in Table 2. It is evident that the model without the fusion layer exhibits degraded accuracy, despite having a larger number of parameters. This clearly demonstrates the effectiveness of the fusion mechanism in enabling more expressive feature interactions between the GNN and self-attention outputs, as opposed to the limited representational capacity of simple element-wise summation.

# 4.3.2 Effect of graph construction strategies

We conduct an ablation study to evaluate the impact of graph construction methods on the accuracy and computational efficiency of the proposed GITO model. Specifically, we compare two widely used strategies: K-nearest neighbors (KNN) and radius-based (circular) graphs [16, 36]. In the KNN strategy, each node is connected to its $k$ nearest neighbors based on spatial proximity.
In the circular strategy, nodes are connected to all other nodes within a fixed radius, forming edges only if the pairwise distance falls below the specified threshold.

Figure 3: Comparison of GITO’s predictions against the ground truth, and the corresponding absolute error plots for a test sample from the NS dataset: a) velocity component $u$, b) velocity component $v$, and c) pressure $p$.

Figure 4: Comparison between GITO’s predicted Mach number and the ground truth for a representative test sample from the Airfoil dataset. The left and middle panels show the ground truth and predicted fields, respectively, while the right panel presents the absolute error.

Table 2: Ablation study comparing GITO with and without the fusion layer on the NS dataset. The fusion layer combines outputs from the GNN and self-attention paths. Reported values are relative $L^2$ errors; lower is better.

NS Dataset. As shown in Table 3, increasing the number of neighbors in KNN (from 4 to 16) consistently reduces the relative $L^2$ error, indicating that denser local connectivity enables better modeling of spatial dependencies. However, this improvement comes at the cost of a significant increase in the number of graph edges (from 41k to 164k), which leads to higher memory usage and computational time during training and inference. On the other hand, the circular strategy with a radius of 0.0525 achieves the lowest error (3.91e-2) while maintaining a moderate edge count (16k). This suggests that, with careful tuning, the radius-based approach can capture only the most relevant local interactions and avoid unnecessary edges, providing a better balance between accuracy and efficiency. Larger radii, such as 0.067, may include irrelevant distant nodes, while smaller radii (e.g., 0.04) risk missing essential local connections, both resulting in slightly degraded performance.
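The two construction strategies, and the edge-count trade-off discussed above, can be illustrated on a toy grid. This is a plain-Python sketch; the counts refer to this 5x5 example, not the paper's meshes:

```python
import math

def dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def knn_edges(coords, k):
    """Directed edge from every node to each of its k nearest neighbors."""
    return [(i, j) for i in range(len(coords))
            for j in sorted((j for j in range(len(coords)) if j != i),
                            key=lambda j: dist(coords[i], coords[j]))[:k]]

def radius_edges(coords, r):
    """Directed edge between every pair of nodes closer than radius r."""
    n = len(coords)
    return [(i, j) for i in range(n) for j in range(n)
            if i != j and dist(coords[i], coords[j]) < r]

# a 5x5 unit-spaced grid: KNN edge count grows linearly with k, while the
# radius graph's density is controlled by the distance threshold
grid = [(float(x), float(y)) for x in range(5) for y in range(5)]
k4, k16 = len(knn_edges(grid, 4)), len(knn_edges(grid, 16))
r15 = len(radius_edges(grid, 1.5))
```

On real meshes the same trade-off appears as in Table 3: a larger $k$ or radius improves coverage of local interactions but inflates the edge count, and with it memory and compute.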
Table 3: Ablation study on the effect of graph construction strategies, KNN (with varying number of neighbors) and circular (with varying radius), on model accuracy for the NS dataset. The table reports the sum of relative $L^2$ errors across all three variables (lower is better). The number of edges is reported approximately in thousands.

Heat Dataset. The trends differ for the Heat dataset (Table 4), which features more sparsely distributed query points and different material properties. Here, increasing the KNN count beyond 8 does not yield consistent improvements and, in fact, degrades performance. For instance, using 16 neighbors increases the error to 4.75e-2, compared to 4.61e-2 with 8 neighbors and 4.49e-2 with only 4 neighbors. This is likely because higher KNN values may force connections to spatially distant and physically irrelevant nodes, misleading the model in heterogeneous material settings. Similarly, larger radii in the circular graph (e.g., 0.9) also lead to performance drops due to excessive inclusion of unrelated nodes. The best performance is observed with a small radius (0.25), which maintains sparse yet contextually meaningful connectivity. These results emphasize the importance of tailoring graph construction strategies to the underlying spatial structure and physical properties of the dataset.

Airfoil dataset. For the Airfoil dataset, we used a KNN graph with 16 neighbors to ensure consistent connectivity across the large, sparsely discretized domain. Radius-based graphs either led to disconnected nodes in sparse regions or overly dense connections in clustered areas. KNN provided a balanced neighborhood structure, improving information flow and resulting in better prediction accuracy for the Mach field.

Table 4: Ablation study on the effect of graph construction strategies on model performance for temperature prediction in the Heat dataset. Reported values are relative $L^2$ errors; lower is better.
The number of edges is reported approximately in thousands.

Overall, the ablation studies demonstrate that the choice of graph construction strategy significantly affects both the accuracy and computational efficiency of the model. While KNN provides a simple and adaptive structure, circular graphs, when carefully tuned, offer a more interpretable and controllable connectivity pattern. For datasets with dense spatial coverage (like NS), moderate-radius circular graphs are preferable, while for sparse or heterogeneous domains (like Heat), lower connectivity thresholds help prevent overfitting to irrelevant neighbors. Ultimately, the best graph construction strategy varies depending on the specific characteristics of the problem domain.
# 5 Conclusion

We present a novel graph-informed transformer operator (GITO) architecture for learning complex partial differential equation systems defined on irregular geometries and non-uniform meshes. GITO consists of two main modules: a hybrid graph transformer (HGT) and a transformer neural operator (TNO). HGT leverages a graph neural network (GNN) to encode local spatial relationships and a transformer to capture long-range dependencies. A self-attention fusion layer integrates the outputs of the GNN and transformer to enable more expressive feature learning on graph-structured data. The TNO module employs linear-complexity cross-attention and self-attention layers to map encoded input functions to predictions at arbitrary query locations, ensuring discretization invariance and enabling zero-shot super-resolution across any mesh. Empirical results on benchmark PDE tasks demonstrate that GITO outperforms existing transformer-based neural operators, paving the way for efficient, mesh-agnostic surrogate solvers in engineering applications.
# I. INTRODUCTION

Cloud-native applications are engineered to fully exploit modern cloud computing environments by adhering to scalability, elasticity, resilience, and continuous delivery principles. Built as collections of loosely coupled microservices, these applications are typically containerized and orchestrated using platforms like Kubernetes. Central to this ecosystem, Kubernetes has transformed how cloud-native applications are deployed and managed. By offering a declarative model for infrastructure through YAML manifests, Kubernetes simplifies many operational tasks while introducing its own layer of abstraction and complexity. As systems grow in size and heterogeneity, comprehending the architecture of Kubernetes-managed applications becomes increasingly challenging. DevOps engineers use manual diagramming tools such as Draw.io or Lucidchart to visualize their cluster architectures. However, the fast-paced evolution of microservices and infrastructure in Kubernetes environments makes these static diagrams quickly outdated and error-prone. The gap between live system state and visual documentation often results in confusion, miscommunication, and reduced situational awareness. Moreover, manually navigating YAML manifests, Helm charts, or kubectl outputs across namespaces imposes a high cognitive load and hinders adequate system comprehension. In this paper, we introduce KubeDiagrams, an open-source tool that automates the generation of architecture diagrams directly from Kubernetes cluster states or declarative configurations. It bridges the gap between deployment reality and visual documentation, providing a lightweight yet powerful solution for system understanding. KubeDiagrams automates the generation of up-to-date, semantically rich architecture diagrams from live cluster states or declarative configurations such as YAML manifests, Helm charts, and Kustomize files.
KubeDiagrams offers a scriptable, low-friction solution that integrates directly into DevOps workflows, continuously aligning system documentation with actual deployment states. It supports over 47 Kubernetes resource kinds (including custom resources) and provides semantic grouping, official iconography, and extensibility, ensuring that diagrams are accurate and cognitively accessible. Ultimately, KubeDiagrams enhances comprehension, reduces maintenance overhead, and bridges the gap between infrastructure-as-code and system understanding. A practitioner-centered analysis revealed strong adoption and appreciation of KubeDiagrams in real-world DevOps workflows. Feedback collected from blogs, social media, and technical posts highlights that users value the tool’s ability to automate architecture diagram generation, maintain up-to-date documentation, and integrate seamlessly into CI/CD pipelines. Practitioner-cited strengths include ease of use, support for infrastructure-as-code, and high visual clarity, reinforcing KubeDiagrams’ practical relevance. The remainder of this paper is structured as follows. Section II describes the architecture and functionality of the KubeDiagrams tool. Section III illustrates key features through three concrete use cases drawn from real-world scenarios. Section IV compares KubeDiagrams with existing visualization tools, highlighting differences in scope and implementation. Section V examines practitioner feedback collected from grey literature, offering insight into adoption and perceived value. Section VI outlines the main limitations of the current tool implementation. Finally, Section VII summarizes our findings and outlines directions for future work.

# II. THE KUBEDIAGRAMS TOOL

KubeDiagrams is an open-source software visualization tool designed to generate architecture diagrams of Kubernetes-based systems.
Developed as both a command-line utility and a Python library, the tool addresses a key challenge in cloud-native development: keeping architectural documentation accurate, up-to-date, and aligned with infrastructure-as-code (IaC) practices. KubeDiagrams automates the transformation of declarative system descriptions (such as Kubernetes manifests, Kustomize overlays, Helm charts, and helmfile descriptors) or live cluster state into clear, structured, and semantically meaningful diagrams.

# A. Core Functionality and Features

KubeDiagrams supports a wide range of input formats and deployment scenarios. It can ingest YAML-based static configurations or connect to a running Kubernetes cluster via kubectl, providing support for both design-time and runtime documentation. The following features highlight its utility and robustness:

a) Versatile Input Sources: Users can generate diagrams from:

• Live cluster state, using kubectl get all -o yaml | kube-diagrams -o diagram.png
• Local YAML manifests representing individual or multi-resource configurations
• Kustomize overlays and patched configurations
• Helm charts, including remote and OCI-based repositories, via the helm-diagrams command
• helmfile descriptions composing Helm charts, Kustomize overlays, and Kubernetes manifests together

b) Comprehensive Kubernetes Object Support: KubeDiagrams supports over 47 Kubernetes resource types including core objects (e.g., Pod, Deployment, Service), network policies, storage and RBAC resources, and Custom Resource Definitions (CRD). The tool ensures complete coverage of both standard and platform-extended workloads, enabling architectural diagrams that reflect production-grade environments.

c) Semantic Grouping and Label-Based Clustering: Resources are organized hierarchically based on namespaces and labels (e.g., app, app.kubernetes.io/name, tier). This grouping improves scalability and comprehension by visually delineating application boundaries, services, and system responsibilities.

d) Graph Semantics and Edge Typing: Relationships between resources are represented with meaningful edge styles.

e) Official Kubernetes Iconography: To reduce cognitive overhead, the tool uses icons from the Kubernetes design language. These visuals improve diagram legibility and align with the mental models of platform engineers and DevOps practitioners familiar with Kubernetes.

f) Multiple Export Formats: KubeDiagrams supports output to PNG, JPG, GIF, TIFF, SVG, PDF, and GraphViz DOT, making it suitable for integration into a wide range of documentation and visualization workflows, from markdown-based internal wikis to external presentations and graph analytics tools.

g) CI/CD and Automation-Friendly: The tool is lightweight and easily scriptable, available via pip or as a container image. Teams can embed it within CI/CD pipelines like GitHub Actions to auto-generate diagrams and publish them to documentation portals, ensuring the system architecture is continuously synchronized with deployments.

# B. Visual Semiotics

Fig. 1: KubeDiagrams visual semiotics.

1) Core Semiotics: As shown in Fig. 1, the KubeDiagrams visual semiotics is composed of:

a) Clusters: A visual cluster contains other clusters, resources, and edges.
There are two categories of visual clusters:

• System clusters, including:
– (1) Kubernetes cluster is the top-level cluster containing all the namespaces, logical clusters, and resources composing a running cloud-native system.
– (2) Namespace cluster represents a Namespace resource and all its owned/namespaced resources.
• Logical clusters, including:
– (3a) K8s instance cluster contains all the Helm charts, applications, components, and resources composing a cloud-native system instance.
– (3b) Helm chart cluster contains all the applications, components, and resources packaged in a same Helm chart.
– (3c) K8s application cluster contains all the components and resources coherently forming a cloud-native application.
– (3d) K8s component cluster contains a set of resources forming a coherent part of a cloud-native application.

b) Nodes: A node is the visual representation of a Kubernetes resource. Its upper part is a visual icon representing the kind of the resource, such as Job, Service, Pod, ConfigMap, or NetworkAttachmentDefinition. Its lower part is the name of the resource. There are two categories of visual nodes:

• (4) Built-in resource is provided by Kubernetes clusters natively. Its icon respects the iconography defined by the Kubernetes community1.
• (5) Custom resource is not provided by Kubernetes clusters natively, such as NetworkAttachmentDefinition, Certificate, etc. It requires deploying a custom operator, which defines the structure of the custom resource via a CRD and implements its dynamic behavior via dedicated controllers. The icon of a custom resource is freely defined by operator providers or end-users.

c) Edges: An edge is the visual representation of a relation between two Kubernetes resources.
There are three categories of visual edges:

• (6) explicit object reference (black solid line), e.g., from Pod to ConfigMap
• (7) label-based selector (black dashed line), e.g., from Service to Pod
• (8) owner/controller (black dotted line), e.g., Pod owned by Job

Thereby, the KubeDiagrams visual semiotics is very simple (3 meta-concepts and 8 instantiations) and easily understandable by any Kubernetes practitioner. Moreover, this semiotics is customizable and extensible, as illustrated later in Section III.

2) Supported Resource Types and Icons: KubeDiagrams supports visualization of core and extended Kubernetes resources including but not limited to:

• Workloads: Pod, Deployment, StatefulSet, DaemonSet, Job, CronJob, ReplicationController, PodTemplate
• Configuration: ConfigMap, Secret
• Scaling: HorizontalPodAutoscaler, VerticalPodAutoscaler
• Policies: LimitRange, PodDisruptionBudget, PodSecurityPolicy, ResourceQuota
• Network: Service, Endpoints, EndpointSlice, Ingress, IngressClass, NetworkPolicy, NetworkAttachmentDefinition
• Storage: PersistentVolume, PersistentVolumeClaim, StorageClass, VolumeAttachment, CSINode, CSIDriver, CSIStorageCapacity
• RBAC: ServiceAccount, Role, RoleBinding, ClusterRole, ClusterRoleBinding, User, Group
• Control Plane: Node, PriorityClass, RuntimeClass, APIService
• Custom resources: CustomResourceDefinition, ValidatingWebhookConfiguration, MutatingWebhookConfiguration

1https://github.com/kubernetes/community/tree/master/icons

Mappings between these resource kinds and visual elements (nodes, edges, clusters) are defined in an internal configuration file and can be extended through users’ custom configurations.

3) Extensibility and Customization: KubeDiagrams can be customized through external configuration files written in YAML.
Users can define:

• Custom visual mappings for CRDs or extended resource types
• Logical clusters to group heterogeneous resources (e.g., external services, legacy systems)
• Additional node and edge types for domain-specific semantics

This extensibility enables adaptation to varied deployment topologies and supports domain-specific visualizations beyond the default Kubernetes model.

# C. Installation and Integration

Installation is straightforward:

pip install KubeDiagrams

or via Docker:

docker run -v "$(pwd)":/work philippemerle/KubeDiagrams kube-diagrams -o output.png input.yaml

These options enable flexible integration into local development environments or automated workflows. For example, documentation repositories can be configured to regenerate diagrams on every Git push or cluster deployment.

# D. Usage Scenarios

The tool has been applied in a variety of real-world software engineering contexts:

• Continuous Architecture Documentation: Ensures that architecture diagrams reflect the current system state in CI/CD pipelines.
• Multi-Environment Comparison: Highlights configuration drift across environments (e.g., dev, staging, production).
• System Comprehension and Onboarding: Facilitates faster onboarding and better system understanding through visual explanations.
• Operational Analysis and Debugging: Diagrams generated during incident response can help identify dependencies and potential points of failure.
• Architecture Review and Compliance Audits: Offers visual artifacts for validating system design and audit reporting.

# E. Implementation

Fig. 2 illustrates the high-level software architecture and processing pipeline of KubeDiagrams. At the input stage, Kubernetes resources can be sourced from either helmfile descriptors (via the helmfile command), Helm charts (via the helm command), kustomization files (via the kustomize command), or directly from live clusters (via the kubectl command).
These sources produce Kubernetes manifests in YAML format, which serve as the primary input for the tool. KubeDiagrams, implemented in Python 3.9+, parses these manifests using the PyYAML library to extract structured data. Next, KubeDiagrams transforms this data into a visual representation. The tool is configurable through both built-in and custom configuration files, allowing users to customize visual mappings, such as icons, labels, and groupings, including support for custom resources (CRDs). Once the internal representation is constructed, KubeDiagrams outputs a .dot file, the standard format used by Graphviz [1] for describing graphs. This file is then processed by Graphviz’s dot utility to generate visual outputs in multiple formats, including PNG, JPG, GIF, TIFF, SVG, and PDF. Overall, KubeDiagrams acts as a pipeline that transforms declarative infrastructure-as-code (IaC) into expressive, customizable architecture diagrams, supporting automation, extensibility, and multiple output formats suitable for documentation or presentation.

Fig. 2: KubeDiagrams software architecture.

KubeDiagrams is distributed as a Python package2, a container image for running KubeDiagrams inside a container3, a Nix flake for reproducible builds4, and a GitHub action for generating diagrams at CI/CD time5.

# F. Availability

KubeDiagrams is available under the GPL-3.0 license and can be accessed via GitHub at: https://github.com/philippemerle/KubeDiagrams

Its open-source nature, low installation barrier, and strong alignment with DevOps workflows make it a valuable addition to the software visualization toolbox for modern Kubernetes-based systems.

# III. USE CASES

This section illustrates three use cases of KubeDiagrams.
KubeDiagrams can be applied to 1) any Kubernetes-based business application, e.g., the well-known WordPress publishing platform, 2) any Kubernetes operator implementing controllers for custom resources, e.g., the well-known cert-manager operator, and 3) any Kubernetes control plane implementation, e.g., the well-known minikube one.

# A. WordPress Publishing Platform

One of the official Kubernetes tutorials6 is based on WordPress, an open-source publishing platform used by millions of websites worldwide. Fig. 3 shows the diagram generated by KubeDiagrams automatically from the tutorial’s manifests. As shown by the diagram, this use case is composed of two deployment workloads: wordpress encapsulates the publishing platform and wordpress-mysql encapsulates a MySQL database manager. Each workload is exposed by its own service (svc) and has its own persistent volume claim (pvc). A secret containing the password to access the database is shared between both workloads. Dashed black arrows represent selectors from services to workloads and solid black arrows represent explicit references between resources. All these resources are in the default namespace, represented by the dashed black container. As they are all labelled with the same label (app: wordpress), all of them are grouped in the light purple container. The mapping between the label app and the visual container attributes (title and bgcolor) is defined in the following KubeDiagrams custom configuration:

```yaml
clusters:
  - label: app
    title: "Application: {}"
    bgcolor: "#ECE8F6"
```

2 https://pypi.org/project/KubeDiagrams
3 https://hub.docker.com/r/philippemerle/kubediagrams
4 https://github.com/philippemerle/KubeDiagrams/blob/main/flake.nix
5 https://github.com/philippemerle/KubeDiagrams/blob/main/action.yml
6 https://kubernetes.io/docs/tutorials/stateful-application

Fig. 3: Generated diagram for WordPress manifests.

Fig. 4 shows a generated diagram representing a WordPress instance deployed on a Kubernetes cluster, represented by the encompassing blue container. The deployment of both WordPress workloads instantiates new resources compared to Fig. 3, i.e., each deployment owns a replica set (rs) managing pods, the default namespace owns a service account (sa) and a config map (cm) containing Kubernetes credentials, and each persistent volume claim owns a persistent volume (pv) of the standard storage class (sc). This use case shows that developers can use KubeDiagrams to visualize their business cloud-native applications (here WordPress) at both development (Fig. 3) and deployment (Fig. 4) times with minimal cognitive effort.

# B. cert-manager Operator

cert-manager7 is the world’s leading cloud-native X.509 certificate management solution. Concretely, cert-manager is a Kubernetes operator controlling both Issuer and Certificate custom resources. As declared in Listing 1, a certificate is managed by a certificate authority called an issuer (Lines 13-14) and is stored with its signed private key in a dynamically created secret (Line 15). cert-manager deals with a variety of certificate authorities, such as the self-signed issuer declared in Line 6.

Fig. 4: Generated diagram for a deployed WordPress instance.

Fig. 5: Visual representation of Issuer and Certificate custom resources.

```yaml
 1  apiVersion: cert-manager.io/v1
 2  kind: Issuer
 3  metadata:
 4    name: selfsigned-issuer
 5  spec:
 6    selfSigned: {}
 7  ---
 8  apiVersion: cert-manager.io/v1
 9  kind: Certificate
10  metadata:
11    name: serving-cert
12  spec:
13    issuerRef:
14      name: selfsigned-issuer
15    secretName: serving-cert
```

Listing 1: Declaration of Issuer and Certificate custom resources.

KubeDiagrams allows one to associate a visual representation with any Kubernetes custom resource, as illustrated in Fig. 5. The mapping between a custom resource type and its visual representation is declaratively defined as shown in Listing 2. Custom resource types are identified by the concatenation of their kind and apiVersion (Lines 2 and 5). The scope of a custom resource type is either Namespaced or Cluster (Lines 3 and 6).
A visual icon is associated with each custom resource type (Lines 4 and 7). A custom resource can dynamically create other resources, as illustrated in Lines 8-11. The reference and selector fields of a custom resource (Lines 13 and 20) are mapped to visual edge attributes (Lines 16-19 and 21-24).

Fig. 6: Generated diagram for cert-manager Helm Chart.

```yaml
 1  nodes:
 2    Issuer/cert-manager.io/v1:
 3      scope: Namespaced
 4      custom_icon: issuer.png
 5    Certificate/cert-manager.io/v1:
 6      scope: Namespaced
 7      custom_icon: certificate.png
 8      nodes:
 9        spec.secretName:
10          kind: Secret
11          apiVersion: v1
12      edges:
13        spec.issuerRef.name:
14          kind: Issuer
15          apiVersion: cert-manager.io/v1
16          graph_attr:
17            color: black
18            style: solid
19            direction: up
20        spec.secretName:
21          graph_attr:
22            xlabel: create
23            color: black
24            style: dotted
```

Listing 2: KubeDiagrams custom configuration for cert-manager.

Fig. 6 shows the generated diagram for the Helm chart packaging cert-manager8. Note that this generated diagram clearly summarizes key architectural aspects encoded in around 1,444 lines of YAML manifests. The colored containers represent the hierarchical structure of cert-manager, including its Helm charts (here just one), applications, components, and resources. Most of its resources are dedicated to the configuration of Kubernetes role-based access control policies: roles (role and c.role), role bindings (rb and crb), and service accounts (sa). Two resources (vwc and mwc) extend the Kubernetes API server in order to validate and mutate Certificate and Issuer custom resources. Finally, four workload resources (deploy and job) encapsulate the functional code of cert-manager. This use case shows how KubeDiagrams helps to visualize any cloud-native operator (here cert-manager) and its controlled custom resources (here Certificate and Issuer) with little cognitive effort.

# C. minikube Control Plane

minikube9 quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows; it is mainly used by developers rather than in production. As shown in Fig. 7, the kube-system namespace contains 1) the control plane and its kube-apiserver, etcd, kube-scheduler, and kube-controller-manager components, 2) a network communication service (kube-proxy), 3) a local DNS service (kube-dns), 4) a local storage service, 5) a metrics service (metrics-server), and 6) bootstrapping role-based access control policies.

Fig. 7: Generated diagram for minikube control plane.

The three previous use cases show that KubeDiagrams is helpful to visualize, document, and understand all the layers of any cloud-native system, including the Kubernetes control plane (e.g., minikube), custom controllers (e.g., cert-manager), and business applications (e.g., WordPress). The KubeDiagrams public repository contains many other use cases10, such as the k0s lightweight Kubernetes control plane for embedded systems, several 5G core network functions, various custom controllers, and business applications.

# IV. TOOLS’ COMPARATIVE ANALYSIS

The Kubernetes visualization tools landscape demonstrates considerable variation in scope, activity, and adoption. KubeDiagrams leads in terms of Kubernetes resource coverage, supporting 47 distinct Kubernetes resource kinds, more than four times that of many tools. Its development also remains current, with the last commit dated May 2025 and eight contributors maintaining the project. The high engagement correlates with strong popularity, as evidenced by 803 GitHub stars. KubeView [2], while older (created in February 2019), still maintains the largest user base, with 993 stars. Despite its last commit occurring in 2022, the tool has garnered attention for its stability and mature features. However, its support covers only 10 resource kinds, which limits its use for more complex cloud-native systems.
In contrast, newer tools like K8s Diagram Architecture Generator [7] and KubeDraw [15] emerged in 2024. Both show recent activity and initial community interest, but their GitHub stars remain modest (20 and 0, respectively). They each list only one or two contributors, which may pose risks for long-term maintenance. Projects like k8sviz [4] and Kubernetes diagrams [5] sit in the middle ground. They offer moderate resource kind support (12 and 8, respectively), have received updates within the last few years, and retain small but active contributor bases. These tools may serve well for focused use cases or educational purposes but lack the broader ecosystem coverage seen in KubeDiagrams. Several other tools, including k8s-diagrams [10], kube-diagram [11], and k8d [13], have not seen updates since their creation, and each has only a single contributor. These attributes suggest limited ongoing support and a lower likelihood of future enhancements.

The Kubernetes visualization ecosystem varies significantly in implementation strategies, input handling, and output capabilities. Among the surveyed tools (Table II), only KubeDiagrams supports the full range of input sources, including raw manifests, Helm charts, Kustomize files, and live cluster state. Most other tools limit input to the Kubernetes API or require annotations, reducing flexibility in Infrastructure-as-Code workflows. Tools such as k8sviz [4], Kubernetes diagrams [5], and k8s-diagrams [10] consume live cluster data but offer fewer customization options. Regarding implementation, Go and Python dominate, with several tools leveraging the Diagrams library for static rendering. However, only a few tools can export to multiple formats. KubeDiagrams supports seven output types (PNG, JPG, GIF, TIFF, SVG, PDF, DOT), making it more suitable for integration into diverse documentation workflows.
Other tools like GruCloud [6] or k8d [13] restrict output to single-purpose formats such as PlantUML or draw.io XML, which may limit reuse. Some tools, including react-k8s-viewer [8], focus on web-based rendering but sacrifice portability in documentation. Only KubeDiagrams combines wide Kubernetes kind coverage (47), extensive input compatibility, and broad output format support. These attributes suggest it fills a unique position in the tooling landscape, balancing completeness, automation, and integration.

To contextualize the relevance and adoption of KubeDiagrams within the broader ecosystem of Kubernetes visualization tools, we conducted a comparative analysis using GitHub star history as a proxy for community interest and adoption. Fig. 8 illustrates the evolution of GitHub stars over time for fifteen representative projects. The most prominent project in terms of long-term popularity is benc-uk/kubeview, which has shown steady growth since 2019 and nearly 1,000 GitHub stars by early 2025. This suggests sustained interest due to early market entry and consistent maintenance. In contrast, KubeDiagrams has demonstrated exceptional recent growth, rapidly accumulating 803 stars in less than six months. This steep curve indicates an accelerating adoption trend, positioning KubeDiagrams as a rising contender and potentially the most rapidly growing tool in this space. Furthermore, mkimuram/k8sviz and trois-six/k8s-diagrams represent stable, moderately adopted alternatives, having steadily accumulated 309 and 143 stars, respectively. Their curves indicate slow but consistent community engagement, likely due to mature feature sets but less recent innovation. Several other tools show niche or stagnant adoption trajectories: grucloud/grucloud, kocierik/k8s-to-diagram, and SocialGouv/react-k8s-viewer display modest growth under 200 stars, with nearly flat trends since 2022.
TABLE I: Comparison of Kubernetes Visualization Tools - Activity and Popularity

TABLE II: Comparison of Kubernetes Visualization Tools - Features (a: Kubernetes Icons Set (KIS))

Overall, Fig. 8 reveals a bifurcation in the Kubernetes visualization ecosystem. Tools like kubeview, k8sviz, and k8s-diagrams reflect steady-state maturity (two years without contributions), while KubeDiagrams exemplifies rapid adoption of an actively developed project. KubeDiagrams appears to be the most actively developed and comprehensive option, while KubeView retains a lead in community adoption. Other tools fill niche roles or remain dormant. Researchers and practitioners should weigh functionality, development activity, and contributor engagement when selecting a visualization tool for Kubernetes. Up-to-date Table I, Table II, and Fig. 8 data are available online11.

Fig. 8: GitHub star history (2020-2025) of representative Kubernetes visualization tools, including benc-uk/kubeview, philippemerle/KubeDiagrams, mkimuram/k8sviz, trois-six/k8s-diagrams, grucloud/grucloud, and kocierik/k8s-to-diagram.

# V. PRACTITIONERS’ PERSPECTIVE

To understand how practitioners perceive KubeDiagrams in real-world settings, we systematically searched for unprompted user feedback across several public platforms, including Reddit, Twitter, Medium, and personal technical blogs12. Our goal was not to rely solely on curated testimonials or official documentation, but rather to gather authentic, self-initiated observations from software engineers and DevOps professionals who have used the tool in practice. This exploratory search yielded 6 distinct sources containing explicit commentary on KubeDiagrams. These documents included blog posts detailing integration experiences, Reddit threads discussing usability trade-offs, and social media posts highlighting strengths and limitations. We treated each artifact as a data point reflecting spontaneous practitioner engagement.
Through close reading and thematic analysis, we extracted common patterns, concerns, and endorsements from these sources. These insights allowed us to identify the tool’s position within current DevOps workflows and how its perceived value compares to other visualization alternatives in the Kubernetes ecosystem.

Abhimanyu Saharan, a DevOps engineer, documents in a blog post “Generate Kubernetes Architecture Maps Directly from Your Cluster” [16] his adoption of KubeDiagrams to address persistent challenges with maintaining accurate Kubernetes architecture documentation. He identifies a recurring issue in production environments: architecture diagrams often fall out of sync with actual deployments, creating confusion during onboarding, troubleshooting, and audits. Traditional diagramming tools, such as Lucidchart and Draw.io, require manual updates and do not scale with the dynamic nature of Kubernetes. Saharan integrates KubeDiagrams into his workflow to automate this task. He highlights the tool’s ability to generate up-to-date architecture diagrams directly from a live Kubernetes cluster or manifests, Helm charts, and Kustomize configurations. KubeDiagrams’ CLI and support for a wide range of Kubernetes resource types (including CRDs) make it immediately valuable for operational and pre-deployment contexts.

The blog emphasizes specific benefits observed in practice: improved environment consistency across dev, staging, and prod; clearer onboarding through live diagrams; and faster incident response via visual system maps. Saharan also demonstrates how KubeDiagrams fits naturally into CI/CD pipelines, enabling continuous documentation generation with minimal configuration. He concludes that KubeDiagrams eliminates the lag between system state and system documentation, replacing guesswork with precise, always-current diagrams.
Saharan explicitly states that: “It solved the ‘outdated diagram’ problem in one swoop by always reflecting the live environment.”; “By combining real-time data with smart grouping and rich support for Kubernetes resources, KubeDiagrams delivers diagrams that are both accurate and instantly informative.”

A practitioner blog post by Mr.PlanB [17] highlights the usability and practical value of KubeDiagrams in everyday DevOps tasks. The author describes the challenge of understanding Kubernetes architectures scattered across YAML files, Helm charts, and multiple namespaces. KubeDiagrams addresses this problem by producing clean architecture diagrams directly from live clusters or configuration files using simple command-line invocations. The author emphasizes the tool’s minimal setup (no UI, no configuration overhead) and notes that it integrates well into pipelines and automation scripts. The post underlines a recurring theme: the tool reduces cognitive and operational overhead without overcomplicating the workflow.

The blogger explicitly states that: “You run a command, and out pops a PNG showing how everything connects. Simple.”; “It’s also helping folks troubleshoot and document their setups more easily.”; “For anyone who’s spent too long trying to draw a Kubernetes diagram manually, that’s a pretty big win.”

A technical blog by dbafromthecold [18] demonstrates the application of KubeDiagrams to a Kubernetes deployment of Microsoft SQL Server using StatefulSets and persistent volumes. The author emphasizes the complexity of visualizing stateful workloads and highlights the difficulty of maintaining clear documentation across multiple YAML files. Dbafromthecold argues that KubeDiagrams addresses this challenge by generating accurate architecture diagrams directly from manifest files, allowing practitioners to visualize relationships between services, volumes, StatefulSets, and namespaces.
The author concludes that KubeDiagrams significantly simplifies the task of documenting complex Kubernetes environments and provides immediate value to practitioners managing stateful applications. The blog author explicitly states that: “So having the ability to easily generate diagrams is really helpful... because we all should be documenting everything, right?”; “It works really well and is a great way to visualise objects in Kubernetes.”

TABLE III: Practitioner Statements about KubeDiagrams by theme and source

Fig. 9: Frequency of practitioner themes related to KubeDiagrams across multiple sources.

Table III and Fig. 9 highlight recurring themes in practitioner feedback about KubeDiagrams from the six sources, drawn from multiple blog posts, community threads, and informal reviews. The most prominent theme is automation of diagram generation (10 mentions), which reflects a widespread desire to replace manual, error-prone documentation with tools that generate architecture views directly from live Kubernetes state or configuration artifacts. Closely following are documentation maintenance and usability and simplicity, each mentioned 8 times, underscoring the tool’s appeal as both a practical aid for maintaining up-to-date system knowledge and a lightweight addition to existing workflows. Themes such as community appreciation (7 mentions) and visual quality and readability (5 mentions) confirm a positive reception regarding the aesthetics and impact of the diagrams, especially when compared to traditional tools like Lucidchart. Less frequent but still significant are mentions of cluster-wide and IaC support, integration into workflows (e.g., GitHub Actions, serverless deployments), and feature requests (e.g., HTML outputs, filtering ReplicaSets), indicating areas where users see potential for future development.
Overall, the distribution of feedback confirms that KubeDiagrams addresses concrete pain points in Kubernetes-based DevOps practices and is valued for its automation, clarity, and low-friction integration into real-world tooling environments.

# VI. LIMITATIONS

Although KubeDiagrams supports a wide range of Kubernetes resource types (including CRDs), it does not visualize ephemeral system components such as Events, TokenReview, or real-time metrics. Furthermore, observability and runtime introspection remain outside its scope.

KubeDiagrams currently prioritizes static visualization and automation over interactivity. The tool generates architecture diagrams as static images or DOT files, which limits user exploration during runtime analysis. Engineers who require zoomable, clickable, or searchable diagrams must integrate KubeDiagrams outputs into external visualization platforms or manually post-process the results.

The tool also depends heavily on Kubernetes label conventions to infer application structure. When users omit or inconsistently apply labels such as app, tier, or component, the resulting diagrams may flatten logical groupings or misrepresent boundaries between services. While configuration files allow partial control over layout, KubeDiagrams does not validate semantic intent beyond syntactic matching.

The rendering engine relies on Graphviz for layout and output generation. This decision simplifies integration but constrains layout flexibility, especially for very large or deeply nested deployments. Developers cannot fine-tune spacing, alignment, or edge routing through the command line, which may frustrate users who expect WYSIWYG control or web-based refinement.

Finally, the project has not undergone a formal usability study. The authors collected community feedback from blogs and social media, but did not conduct structured evaluations with real-world teams.
As a result, the extent to which KubeDiagrams improves onboarding speed, debugging accuracy, or architectural decision-making remains an open question.
Modern distributed applications increasingly rely on cloud-native platforms to abstract the complexity of deployment and scalability. As the de facto orchestration standard, Kubernetes enables this abstraction, but its declarative configuration model makes architectural understanding difficult. Developers, operators, and architects struggle to form accurate mental models from raw manifests, Helm charts, or cluster state descriptions. We introduce KubeDiagrams, an open-source tool that transforms Kubernetes manifests into architecture diagrams. By grounding our design in a user-centered study of real-world visualization practices, we identify the specific challenges Kubernetes users face and map these to concrete design requirements. KubeDiagrams integrates seamlessly with standard Kubernetes artifacts, preserves semantic fidelity to core concepts, and supports extensibility and automation. We detail the tool's architecture, visual encoding strategies, and extensibility mechanisms. Three case studies illustrate how KubeDiagrams enhances system comprehension and supports architectural reasoning in distributed cloud-native systems. KubeDiagrams addresses concrete pain points in Kubernetes-based DevOps practices and is valued for its automation, clarity, and low-friction integration into real-world tooling environments.
[ "cs.SE", "cs.DC" ]
# 1 Introduction

Most research in Data Science focuses on technical resources, overlooking project organization and management. Many Data Science projects fail or fall short of delivering expected value, with 82% of teams lacking a process model or methodology [1]. The Cross Industry Standard Process for Data Mining (CRISP-DM) is a widely used process model in Data Science [2]. It is technology-independent, adaptable across industries, and defines key steps for data science projects [3]. However, it lacks predictability and does not follow agile principles and practices [4,5]. Data science models are integrated into lines of code, and applying software engineering practices can enhance organization and efficiency in the development and maintenance of that code [6]. Agility is a concept that has been modestly explored in Data Science, presenting an opportunity to integrate the eXtreme Programming (XP) method with CRISP-DM [7].

We conducted an empirical study in a real organizational context. To represent the field of Data Science, we used CRISP-DM as a reference for the process. To explore agility, we adopted the agile development method XP. The guiding question for this study is:

RQ1: How can the agility of the XP method be integrated with CRISP-DM in Data Science projects?

# 1.1 CRISP-DM

CRISP-DM is a structured process model used to guide data mining, data science, and analytics projects [2]. It consists of six phases [8,3]:

– Business Understanding: focuses on defining project objectives and business requirements, translating them into a data mining problem.
– Data Understanding: focuses on collecting and exploring the data to identify patterns, issues, and derive initial insights.
– Data Preparation: focuses on cleaning, selecting, and transforming the data to make it suitable for modeling.
– Modeling: focuses on selecting and applying appropriate modeling techniques, refining the model as necessary.
– Evaluation: focuses on assessing the model to ensure it meets business objectives and reviewing the steps taken.
– Deployment: focuses on implementing the model in the business environment, using systems, software, reports, or dashboards, with continuous monitoring to ensure alignment with business goals.

# 1.2 eXtreme Programming

Brief description of the XP practices used in this study:

– User Stories: simple, short descriptions of features focused on user needs that guide development and help prioritize work [9,10].
– Planning and Releases: high-level plan covering work to be done each quarter, aligning goals and deliverables [9,11].
– Iterations: short development cycles, usually 1 to 4 weeks, during which features are planned, built, tested, and small releases are made for continuous feedback [9,11].
– Slack: buffer time to handle unexpected issues, allowing the team to maintain quality and avoid overload [11].
– Whole Team: a multidisciplinary team with all the skills necessary for the project [11,12].
– Informative Workspace: a visual, organized workspace with clear, accessible information on project progress, promoting communication and transparency [11,12].
– Sustainable Pace: a steady, healthy work rhythm without overloading, ensuring long-term productivity and quality [11,12].
– Pair Programming: two developers work together at one station, reviewing and improving code simultaneously, enhancing quality and collaboration [11].
– Continuous Integration: frequent, automated code integration to ensure the software is always functional and minimize errors [13].
– Test-First Development: writing automated tests before code to ensure requirements are met and code quality is high. This creates an efficient cycle of testing, coding, and refactoring [11].
– Incremental Design: system design done incrementally and iteratively, starting with simple solutions that are gradually refined, making future changes easier [11,14].
– Spikes: quick exploration techniques to solve technical or design problems, evaluate potential solutions, and discard unfeasible ones [12].

# 2 Method

In this study, we conducted a case study of Elo7, an e-commerce company founded in 2008 and a market leader in Brazil, specializing in handmade products, operating a platform with over 9 million items, and having an agile data science team. An intrinsic case study seeks to deeply understand an individual case in its specific aspects [16]. Furthermore, a revelatory case allows the researcher to examine a phenomenon previously underexplored in scientific research [15]. Additionally, no relevant studies were found that address the combined use of CRISP-DM and XP [7]. Given that few companies adopt both methods, our objective was to understand a specific instance of the adoption of CRISP-DM and XP in Data Science at the company Elo7. Figure 1 illustrates the case study process as applied in this study.

Fig. 1. Case study methodology

# 2.1 Planning

We defined the scope of the case study. We applied the convergence of evidence through interviews and questionnaires. This approach strengthens construct validity in case studies. Using multiple sources of evidence allows for diverse assessments of the phenomenon, improving the accuracy of the event presentation [15].

# 2.2 Design and Preparation

Interview: The open interview is used to explore a topic in depth, allowing the interviewee to freely discuss the proposed theme [17]. To understand the company and its processes in data science projects in an exploratory manner, we decided to conduct a qualitative, open-ended, and unstructured interview, which provided input for the survey. We prepared the open interview to be conducted remotely (via telepresence) with the data leadership of the company, focusing on the specific topic of Agility in Data Science.
We divided the interview into a discussion on the general topic of agility in data science, followed by an exploration of the aspects of CRISP-DM and XP.

Questionnaire: To guide the evidence collection through questionnaires, three main aspects were analyzed: (1) the application of the CRISP-DM stages by the data science team, (2) the adoption of XP practices within the team, and (3) the specific XP practices used at each stage of the CRISP-DM framework in the company. Next, we structured the survey with closed questions using a 5-point Likert scale, which enables precise and standardized data collection that respondents can easily understand. The response options were: Never, Rarely, Occasionally, Frequently, and Always or More frequently.

To collect data on the application of the CRISP-DM stages by the data science team, we designed a questionnaire with 20 questions divided across the six phases of CRISP-DM: Business Understanding, Data Understanding, Data Preparation, Modeling, Evaluation, and Deployment. These questions cover data science activities, ranging from business vision and strategic alignment to model deployment, including technical tasks such as data preparation and modeling. To assess the implementation of XP practices within the team, a questionnaire was designed with 13 questions covering the following XP practices: User Stories, Releases, Iterations, Slack, Whole Team, Informative Workspace, Sustainable Pace, Pair Programming, Continuous Integration, Test-Driven Development, Incremental Design, and Spikes. Both the CRISP-DM and XP questions included an optional open-ended field to collect additional information. Finally, to assess XP practices across CRISP-DM stages, participants linked each XP practice to one or more CRISP-DM phases. The entire questionnaire can be found in the supplementary material (https://.....).

# 2.3 Data Collection

Interview: On November 22, 2022, we held a meeting with the lead data scientist at Elo7 from 5:00 p.m.
to 6:45 p.m. via Google Meet. We presented the CRISP-DM process model, and the data scientist confirmed that all stages and tasks in the model aligned with the practices at Elo7. She also reported that the company applied the agile XP method and executed Data Science activities in iterative cycles. She emphasized the use of the Spikes practice, which the team used to solve technical challenges and gain a deeper understanding of the problem and business objectives. Additionally, the company routinely conducted both product and process reviews. Finally, Elo7 deployed models through software or applications and maintained a model training platform, which facilitated the maintenance and improvement of deployed models. Questionnaire: Between January 16 and 20, 2023, we conducted a pilot with three Data Science professionals from Elo7 to validate the format and clarity of the questions. Based on the feedback, we made the following adjustments: the inclusion of a figure depicting the phases of CRISP-DM, an improvement in the description of XP practices, and the replacement of the Likert scale of agreement with a frequency scale. Subsequently, we distributed the questionnaire to all professionals involved in Data Science projects at Elo7. It was available from February 6 to 23, 2023, on the Google Forms platform, administered online, unsupervised, with instructions for completion. The form provided information on data confidentiality and anonymity, and all respondents agreed to the terms. No personal data was collected. # 3 Results The survey included 13 professionals, encompassing roles such as data scientists, machine learning engineers, and product managers. This diversity of profiles enabled a comprehensive analysis of the practices and challenges encountered in the application of CRISP-DM and XP. 
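The bucketed adherence figures quoted in the subsections that follow (responses grouped into frequently/always, occasionally, and rarely/never) can be reproduced from raw Likert data with a short script. The sketch below is purely illustrative: the response list is invented, and this is not the authors' analysis code.

```python
from collections import Counter

# Illustrative sketch (not the authors' analysis code): grouping 5-point
# Likert responses into the buckets reported in the survey results.
SCALE = ("Never", "Rarely", "Occasionally", "Frequently", "Always")

def adherence_buckets(responses):
    """Percentage of responses falling into each reporting bucket."""
    counts = Counter(responses)
    total = sum(counts.values())
    groups = {
        "frequently/always": counts["Frequently"] + counts["Always"],
        "occasionally": counts["Occasionally"],
        "rarely/never": counts["Rarely"] + counts["Never"],
    }
    return {k: round(100 * v / total) for k, v in groups.items()}

# Hypothetical responses for a single questionnaire item:
sample = ["Always"] * 6 + ["Frequently"] * 4 + ["Occasionally"] * 2 + ["Rarely"]
print(adherence_buckets(sample))
```

With 13 hypothetical respondents as above, the script reports 77% frequently/always, 15% occasionally, and 8% rarely/never, mirroring the reporting format of Figures 2 and 3.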
# 3.1 CRISP-DM

The results presented in Figure 2 indicated a high level of adherence to CRISP-DM, with 86% of responses indicating usage between frequently and always. This demonstrated that CRISP-DM was widely adopted by the majority of respondents, with consistent application in their activities. Only 10% applied CRISP-DM occasionally, while 4% rarely or never used it.

Fig. 2. Survey results on the application of CRISP-DM stages by the agile data science team

Business Understanding: Part of the team reported a lack of alignment with business objectives. Additionally, the focus was generally on data exploration rather than on business goals. This phase was often led by product managers, with limited involvement from data scientists. Data Understanding: The company had well-established and previously explored databases. Additionally, hypotheses from prior projects reduced the need for these activities. When new hypotheses were required, both technical team members and product managers participated, ensuring alignment with business objectives and goals. Data Preparation: Activities such as removing inconsistent records, formatting data, and aggregating similar attributes could occur concurrently with the Modeling phase. The CRISP-DM framework allows for overlap between the Data Preparation and Modeling phases [8], which can create the impression that Data Preparation activities are being carried out within the Modeling phase. Some of the team confused these activities and regarded them as part of the Modeling phase instead of Data Preparation. Modeling: This was the most well-known and widely adopted phase among the data science team. Evaluation: The evaluation process was well known and practiced by the data science team. However, part of the model evaluation included A/B testing, which was conducted during the Deployment phase.
This experimental approach presents two versions of the same element to different groups of users randomly, aiming to determine which version performs better against a business metric [18]. Deployment: In this phase, ML engineers were actively involved in contingency planning and monitoring, while data scientists contributed less, which explained why data scientists never or rarely performed these activities. Data scientists mainly focused on data preparation and modeling, while ML engineers concentrated on deploying infrastructure and systems that support the model, including model monitoring.

# 3.2 XP practices

The results presented in Figure 3 indicated that 71% of responses reflected the use of agile XP practices between frequently and always. Additionally, 19% applied them occasionally, while 10% rarely or never used them. This suggested that agile XP practices were widely adopted, although a portion of respondents either used them sporadically or not at all, which may point to areas for improvement in their integration or awareness within the organization.

Fig. 3. Survey results on XP Practices adoption by the agile data science team

User stories: In the company, user stories were not always clear or understandable to product managers. The focus was often on technical terms, sometimes overlooking business rules. There was also confusion about the responsibilities for writing user stories and tasks, with many stories being written at a technical task level. Releases: Some ML engineers and data scientists were not familiar with how this process was applied internally within the company. Although the company had a long-term product roadmap, it was not updated quarterly as in XP release planning [11]. The roadmap maps the vision and direction of releases and outlines how the product portfolio will meet business objectives [19]. A gap existed between the technical team and the strategic side of the company regarding this practice and the long-term vision.
Iterations: In the company, iterations typically lasted for two weeks. There was a difference in how delivery was perceived within iterations among data science professionals. Managers viewed deliveries as ready-to-use outputs for users, while data scientists and ML engineers focused more on finished features, though not always available for use. Exploratory analyses did not result in working software, but they still added value to users through reports, insights, graphs, and other deliverables. Slack: Smaller tasks, which can be postponed, are included to handle unexpected issues [11]. However, the tasks chosen for the iteration were essential for delivery, leaving no buffer, and slack practices were not always used. Additionally, the company encouraged extra project activities, such as study time and hours for personal use, but not everyone recognized them as part of slack practices. Whole Team: Certain skills and roles, like front-end developers and data engineers, were missing from the team. As a result, they depended on external members for specific tasks. Informative Workspace: Most team members used dashboards and charts to display relevant project information. This approach provided stakeholders with views of project progress and highlighted any obstacles encountered. Sustainable Pace: The company fostered a sustainable work culture by minimizing unproductive overtime and preventing employee overload. Pair programming: For the data science team, it was an effective practice for sharing knowledge and turning ideas into code. This applied to both architecture design and coding, as well as exchanging ideas in exploratory data science activities. However, when done for long periods and frequently, it can become exhausting [20]. Frequent Builds and Continuous Integration: For ML engineers, this practice required more robust infrastructure. 
Additionally, the lack of maturity in automating continuous integration processes made it difficult to fully adopt these practices. For data scientists, automated tests were rare during experimentation and data analysis, but more common during model deployment. Test-First Programming: This was the least adopted XP practice among the data science team. It is challenging to implement in analytical activities, while it is more commonly used during the development of software that supports the models. However, this practice can be applied during algorithm implementation, including tasks such as data preparation, model creation, tuning, training, and testing. This extends beyond the deployment phase. Therefore, it is necessary to train data scientists in this practice to ensure its effective adoption. Incremental Design: For managers, solutions in the first iteration often had high complexity, making incremental design challenging. It is important to avoid overly complex solutions or those that require more resources than necessary. When building models, selecting a small number of parameters that are easier to interpret and explain is essential [21]. Spikes: This was the practice most commonly used by the team. Spikes were primarily used to find answers to challenging problems and explore potential solutions.

# 3.3 Combining the XP practices with CRISP-DM phases

An XP practice can relate to multiple phases, just as a CRISP-DM phase can involve several XP practices. Figure 4 showcases the relationship between CRISP-DM phases and XP practices. User Stories with CRISP-DM Phases: The team recognized that user stories could be created, refined, or consulted at any phase of CRISP-DM, though they used them less frequently in the Evaluation phase. In this phase, models are evaluated from the perspective of business goals [8], and user stories help ensure alignment with these objectives [10].
However, the technical language in user stories impacted their use in this stage by the team. Releases with CRISP-DM Phases: Release practices were more common in the Business Understanding and Deployment phases, as these phases involved medium- and long-term planning and product delivery according to the plan. In the more analytical phases, such as Data Understanding, Data Preparation, and Modeling, their use was less frequent. However, the team understood that the release plan can be created, consulted, refined, and updated at any stage of CRISP-DM. Iterations with CRISP-DM Phases: Iteration practices, including planning, using short cycles, and delivering in small increments, were frequently integrated into all phases of CRISP-DM.

Fig. 4. Survey results on the integration of CRISP-DM and XP Practices by the Agile Data Science Team

Slack with CRISP-DM Phases: Slack practices were seldom adopted in the phases of CRISP-DM. Extra-project activities and free hours, such as time for training and skill development, were not linked to any specific phase. Additionally, data science activities allowed little buffer for unforeseen events. Whole Team with CRISP-DM Phases: During the Business Understanding phase, the technical team (data scientists and ML engineers) had limited participation, while in the other phases, the business team (product managers) was minimally involved. As a result, the team recognized the need for a more integrated approach, with active participation from everyone in all phases of CRISP-DM. Informative Workspace with CRISP-DM Phases: The team frequently used dashboards and charts to display progress, risks, and project issues throughout all stages of CRISP-DM. Sustainable Pace with CRISP-DM Phases: The sustainable pace practice was frequently adopted in all phases of CRISP-DM, with minimal overtime, focusing on productivity while also caring for the team's well-being.
Pair Programming with CRISP-DM Phases: The pair programming practice was widely adopted across the phases of CRISP-DM. The team used it not only in technical activities but also in other activities such as requirements definition. Frequent Builds and Continuous Integration with CRISP-DM Phases: This practice was rarely used by the team, even when it involved model creation and data processing, which required code manipulation. The team adopted this practice frequently only in the software development phase that utilizes the models, specifically during the Deployment phase. Test-First Programming with CRISP-DM Phases: The team used the Test-First Programming practice the least. However, they frequently applied it in the Deployment phase. Data scientists showed resistance to this approach when creating models, preparing data, and analyzing data, even when code manipulation was involved. Incremental Design with CRISP-DM Phases: This practice was more frequent in the Deployment phase. In the Modeling phase, the models were created with high complexity. Additionally, in the Business and Data Understanding phases, little importance was given to solution design. However, the culture of starting with a simple solution and gradually incrementing it should have been adopted from the very beginning [14]. Spikes with CRISP-DM Phases: Spikes practices were used in all phases of CRISP-DM. In the Business Understanding and Data Understanding phases, the requirements, data, and problem definition were still immature. Therefore, the practice helped to explore insights and problems. In the Modeling phase, it was consistently applied to explore, compare, and discard various algorithms and models.

# 3.4 Recommendations

The adoption of XP and CRISP-DM in data science projects can be challenging.
Issues such as misinterpreted user stories, lack of transparency in the product roadmap, discrepancies in team deliverables, lack of recognition of slack practices, non-multidisciplinary teams, infrequent testing, low maturity in continuous integration, and high model complexity highlight the need for targeted recommendations. The list below presents potential recommendations for the company:

– Stories should be clear and made available to the entire team;
– Stories should be written in business language and perspective;
– Provide training for the data science team on writing user stories;
– Hold workshops to present the strategic planning and roadmap to the entire team;
– Align the definition of deliverables in data science;
– Align the concept of slack time within the team; extra-project activities (e.g., studies and free time) can be considered slack time;
– Assess the possibility of integrating new roles or training the team, such as front-end developers and data engineers;
– Invest in MLOps [22] to improve the maturity and frequency of XP practices, such as frequent builds and continuous integration;
– Train and encourage data scientists to use software engineering practices, such as testing, frequent builds and continuous integration;
– Train the team in Behavior-Driven Development (BDD) [23], using a common language understandable by data scientists, product managers, and ML engineers, promoting testing practices;
– Encourage the data science team to think of solutions that start simpler (less complex).

# 3.5 Threats to Validity

The study was conducted within a single company, which limited the ability to generalize the results to other organizations or sectors. Additionally, the company had an organizational culture that favored the adoption of agile methodologies, which may not apply to companies facing greater resistance to change.
Another limitation was that the study relied primarily on interviews and surveys regarding the perceptions of the team. Due to confidentiality concerns, no documentary evidence was analyzed to verify the adoption of the methods.
This study explores the integration of eXtreme Programming (XP) and the Cross-Industry Standard Process for Data Mining (CRISP-DM) in agile Data Science projects. We conducted a case study at the e-commerce company Elo7 to answer the research question: How can the agility of the XP method be integrated with CRISP-DM in Data Science projects? Data was collected through interviews and questionnaires with a Data Science team consisting of data scientists, ML engineers, and data product managers. The results show that 86% of the team frequently or always applies CRISP-DM, while 71% adopt XP practices in their projects. Furthermore, the study demonstrates that it is possible to combine CRISP-DM with XP in Data Science projects, providing a structured and collaborative approach. Finally, the study generated improvement recommendations for the company.
# 1. INTRODUCTION

Let $(\mathsf{X}, \mathcal{X})$ and $(\mathsf{Y}, \mathcal{Y})$ be measurable spaces, and let $(x_1, y_1), \ldots, (x_n, y_n) \in \mathsf{X} \times \mathsf{Y}$ represent a training data set drawn from random elements $(X_1, Y_1), \ldots, (X_n, Y_n) : \Omega \to \mathsf{X} \times \mathsf{Y}$, which are defined on a probability space $(\Omega, \mathscr{F}, \mathbb{P})$. In general, the probability measure $\mathbb{P}$, or the distribution of the data-generating process $(X_1, Y_1), \dots, (X_n, Y_n)$, is not known. The primary goal of supervised machine learning is to construct a learning algorithm $\mathcal{A} : \bigcup_{n=1}^{\infty} (\mathsf{X} \times \mathsf{Y})^n \to \mathcal{H}$ that accurately predicts the functional relationship between the input (first coordinate) and output (second coordinate) observables, given a family of measurable functions $\mathcal{H} \subseteq \mathsf{Y}^{\mathsf{X}}$, known as the hypothesis class. Even when an exact functional relationship exists, it is typically unknown and may not necessarily belong to $\mathcal{H}$.
More formally, given a measurable loss function $\mathcal{L} : \mathsf{Y} \times \mathsf{Y} \to [0, \infty)$, a probability measure $\mathbb{P}$, and parameters $\varepsilon, \delta \in (0, 1)$, the objective is to construct an algorithm $\mathcal{A}$ and determine $n(\mathbb{P}, \varepsilon, \delta) \in \mathbb{N}$ such that
$$
\mathbb{P}\big( |\mathrm{er}_{\mathbb{P}}(\mathcal{A}((X_1, Y_1), \ldots, (X_n, Y_n))) - \operatorname*{inf}_{h \in \mathcal{H}} \mathrm{er}_{\mathbb{P}}(h)| < \varepsilon \big) \ge 1 - \delta \qquad \forall n \ge n(\mathbb{P}, \varepsilon, \delta),
$$
where $\mathrm{er}_{\mathbb{P}}(h) := \mathbb{E}_{\mathbb{P}}[\mathcal{L}(h(X), Y)]$ is the expected loss of hypothesis $h \in \mathcal{H}$ under $\mathbb{P}$. The smallest $n(\mathbb{P}, \varepsilon, \delta)$ satisfying this condition is known as the sample complexity of $\mathcal{A}$. A common approach to this problem is Probably Approximately Correct (PAC) learnability, which assumes that the sample complexity of a learning algorithm does not depend on the distribution $\mathbb{P}$ of the data (see [2], [25], and [29]). However, this assumption is often difficult to justify. By treating all possible data distributions equally, it can lead to restrictive choices of hypothesis classes or suboptimal sample complexity. An alternative perspective, which this article focuses on, is based on universal consistency and learning rates, where learning rates explicitly depend on the data distribution (see [25] and [29]). A classical assumption in both approaches is that the elements $(X_1, Y_1), \dots, (X_n, Y_n)$ are i.i.d., as discussed in standard references such as [2], [25], and [29].
However, in many real-world supervised learning tasks, the data exhibit temporal dependence and strong correlations between observations, making the i.i.d. assumption unrealistic. This work examines the case where the training dataset is drawn from an iterated random function (see below for the precise definition of this process), establishing the learnability of the corresponding approximate empirical risk minimization algorithm with data-distribution dependent sample complexity, expressed in terms of the Rademacher complexities of $\mathcal{H}$. A common application of these processes is in generating and modeling image data (see Example 2.2 and [8] for more details). Consequently, they can be effectively utilized in supervised machine learning tasks related to visualization, such as text classification, image recognition, and object detection. Formally, let $\mathsf{Z} \subseteq \mathsf{X} \times \mathsf{Y}$ be a complete and separable metric space with bounded metric $\mathrm{d}_{\mathsf{Z}}$. Without loss of generality, assume $\mathrm{d}_{\mathsf{Z}}$ is bounded by 1; otherwise it can be normalized as $\mathrm{d}_{\mathsf{Z}} / \operatorname*{sup}_{z, \bar{z} \in \mathsf{Z}} \mathrm{d}_{\mathsf{Z}}(z, \bar{z})$. This assumption is not restrictive, as many machine learning problems naturally operate within bounded domains. The space $\mathsf{Z}$ is equipped with its Borel $\sigma$-algebra $\mathfrak{B}(\mathsf{Z})$, generated by $\mathrm{d}_{\mathsf{Z}}$.
Additionally, the projection mappings $\mathrm{pr}_{\mathsf{X}} : \mathsf{Z} \to \mathsf{X}$ and $\mathrm{pr}_{\mathsf{Y}} : \mathsf{Z} \to \mathsf{Y}$ satisfy the conditions $\mathrm{pr}_{\mathsf{X}}^{-1}(\mathrm{pr}_{\mathsf{X}}(\mathsf{Z}) \cap \mathcal{X}) \subseteq \mathfrak{B}(\mathsf{Z})$ and $\mathrm{pr}_{\mathsf{Y}}^{-1}(\mathrm{pr}_{\mathsf{Y}}(\mathsf{Z}) \cap \mathcal{Y}) \subseteq \mathfrak{B}(\mathsf{Z})$, ensuring the measurability of $\mathrm{pr}_{\mathsf{X}}$ and $\mathrm{pr}_{\mathsf{Y}}$.

Definition 1.1 (Iterated random function). Let $Z_0$ be a random element taking values in $\mathsf{Z}$, and let $\{\vartheta_n\}_{n \geq 1}$ be a sequence of i.i.d. random elements, independent of $Z_0$, taking values in a measurable space $\Theta$. Consider a measurable function $F : \mathsf{Z} \times \Theta \to \mathsf{Z}$, where $\mathsf{Z} \times \Theta$ is endowed with the product $\sigma$-algebra. Both $Z_0$ and $\{\vartheta_n\}_{n \geq 1}$ are defined on a probability space $(\Omega, \mathscr{F}, \mathbb{P})$. For $n \geq 1$, iteratively define $Z_n := F(Z_{n-1}, \vartheta_n)$. The resulting process $\{Z_n\}_{n \geq 0}$ forms a time-homogeneous Markov chain on $\mathsf{Z}$, known as the iterated random function.

The process $\{Z_n\}_{n \geq 0} = \{(\mathrm{pr}_{\mathsf{X}}(Z_n), \mathrm{pr}_{\mathsf{Y}}(Z_n))\}_{n \geq 0}$ represents input data (the first component) paired with their corresponding outputs (the second component).
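Definition 1.1 can be made concrete with a toy example. The sketch below (hypothetical, not taken from the paper) uses $\mathsf{Z} = \Theta = [0, 1]$ and $F(z, \theta) = (z + \theta)/2$, a contraction with factor $1/2$ in its first argument, so two chains driven by the same noise sequence approach each other geometrically.

```python
import random

# Toy instance of Definition 1.1 (hypothetical, not from the paper):
# Z = [0, 1], Theta = [0, 1], F(z, theta) = (z + theta) / 2.
# Then d(F(z, t), F(z', t)) = |z - z'| / 2 for every fixed t.
def iterate(z0, thetas):
    """Trajectory of Z_n = F(Z_{n-1}, theta_n) starting from z0."""
    traj = [z0]
    for t in thetas:
        traj.append((traj[-1] + t) / 2)
    return traj

random.seed(0)
noise = [random.random() for _ in range(50)]  # i.i.d. theta_n ~ Uniform[0, 1]

# Two chains driven by the SAME noise sequence contract geometrically:
gaps = [abs(a - b) for a, b in zip(iterate(0.0, noise), iterate(1.0, noise))]
print(gaps[0], gaps[10])  # the gap shrinks like 2**-n times the initial gap
```

The geometric shrinkage of `gaps` is exactly the coupling behavior that the contraction assumption introduced below formalizes.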
The primary objective is to learn a function from a given hypothesis class $\mathcal{H}$ that, given a training data set $z_0, \dotsc, z_{n-1} \in \mathsf{Z}$ drawn from the first $n$ samples of $\{Z_n\}_{n \geq 0}$, best approximates the relationship between the input and output, where the sample complexity is data-distribution dependent and expressed in terms of the Rademacher complexities of $\mathcal{H}$. To the best of our knowledge, [3] and this work provide the first data-distribution dependent sample complexity bounds in a Markov chain setting. Existing studies in the non-i.i.d. framework typically consider classes of mixing time-series processes or irreducible and aperiodic ergodic Markov chains (such as in [3]; see the literature review below). In contrast, our approach does not require the irreducibility or aperiodicity of $\{Z_n\}_{n \geq 0}$. A Markov chain $\{Z_n\}_{n \geq 0}$ is irreducible if there exists a $\sigma$-finite measure $\varphi(\mathrm{d}z)$ on $\mathfrak{B}(\mathsf{Z})$ such that for any measurable set $B$ with $\varphi(B) > 0$, we have $\sum_{n=0}^{\infty} \mathbb{P}^z(Z_n \in B) > 0$ for all $z \in \mathsf{Z}$. It is aperiodic if no partition $\{B_1, \dots, B_k\} \subseteq \mathfrak{B}(\mathsf{Z})$ with $k \geq 2$ exists such that $\mathbb{P}^z(Z_1 \in B_{i+1}) = 1$ for all $z \in B_i$ and $1 \leq i \leq k-1$, and $\mathbb{P}^z(Z_1 \in B_1) = 1$ for all $z \in B_k$. A typical example of such a process is as follows. Let $\{X_n\}_{n \geq 0}$ be an irreducible and aperiodic Markov chain on $\mathsf{X}$, and let $h_0 : \mathsf{X} \to \mathsf{Y}$ be measurable.
According to [28, Lemma 3.1], the process $Z_n = (X_n, h_0(X_n))$, $n \geq 0$, forms an irreducible and aperiodic Markov chain on $\mathsf{Z} = \{(x, h_0(x)) : x \in \mathsf{X}\}$. Here, the first component represents the system state, such as a vector of cepstral coefficients in speech recognition, the position and velocity of a moving object’s center of gravity in object tracking, or the category of a unit-time price change in market prediction. The second component corresponds to the label associated with each state, such as the emotional state of a speaker, the temporal distance of a tracked object from a reference point, or trading activity (buy/sell/wait). Irreducibility and aperiodicity ensure that the system can transition between states with positive probability and does not exhibit cyclic behavior over finite time steps. However, in certain scenarios, these assumptions may be unrealistic, as some states might not be reachable from all others. This motivates the study of data-generating processes (Markov chains) that do not necessarily possess these properties. The labeling function $h_0(x)$ is typically unknown. Given a training dataset and a hypothesis class $\mathcal{H}$ (which does not necessarily contain $h_0$), the goal is to construct a learning algorithm that selects a hypothesis $h \in \mathcal{H}$ that best approximates the true labeling function.

# 2. MAIN RESULTS

Before presenting the main results, we introduce the notation used throughout the article. For $z \in \mathsf{Z}$, let $\mathbb{P}^z(\cdot)$ denote $\mathbb{P}(\cdot \mid Z_0 = z)$. Given a probability measure $\mu(\mathrm{d}z)$ on $\mathsf{Z}$ (representing the distribution of $Z_0$), define $\mathbb{P}^{\mu}(\cdot) := \int_{\mathsf{Z}} \mathbb{P}^z(\cdot) \, \mu(\mathrm{d}z)$.
For $n \geq 1$, the $n$-step transition functions of $\{Z_n\}_{n \geq 0}$ are given by $\mathcal{P}^n(z, \mathrm{d}\bar{z}) := \mathbb{P}^z(Z_n \in \mathrm{d}\bar{z})$ when starting from $z$, and $\mu\mathcal{P}^n(\mathrm{d}z) := \mathbb{P}^{\mu}(Z_n \in \mathrm{d}z)$ when the initial distribution is $\mu(\mathrm{d}z)$. We impose the following assumption (recall that $\{\vartheta_n\}_{n \geq 1}$ is defined on the probability space $(\Omega, \mathscr{F}, \mathbb{P})$):

(A1): There exists a measurable function $\ell : \Theta \to [0, \infty)$ such that
$$
\mathrm{d}_{\mathsf{Z}}(F(z, \theta), F(\bar{z}, \theta)) \leq \ell(\theta) \, \mathrm{d}_{\mathsf{Z}}(z, \bar{z}) \quad \forall z, \bar{z} \in \mathsf{Z}, \, \theta \in \Theta, \qquad \mathrm{and} \qquad \ell_F := \mathbb{E}[\ell(\vartheta_1)] < 1.
$$

In assumption (A1), $\ell(\theta)$ is defined as the smallest $\ell \geq 0$ such that
$$
\mathrm{d}_{\mathsf{Z}}(F(z, \theta), F(\bar{z}, \theta)) \leq \ell \, \mathrm{d}_{\mathsf{Z}}(z, \bar{z}) \qquad \forall z, \bar{z} \in \mathsf{Z}.
$$
According to [8, Lemma 5.1], the mapping $\theta \mapsto \ell(\theta)$ is measurable. Hence, $\ell_F$ is well defined. Notably, when $\Theta = \mathsf{Z}$ and $F(z, \theta) = \theta$ for all $(z, \theta) \in \mathsf{Z} \times \Theta$, assumption (A1) holds trivially. This demonstrates that the class of data-generating processes considered in this article generalizes the classical i.i.d. case.
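Note that (A1) only requires the Lipschitz factor to contract on average. The sketch below (a hypothetical example, not from the paper) uses a random factor $\ell(\theta) = c$ with $c \sim \mathrm{Uniform}[0, 0.9]$, so $\ell_F = \mathbb{E}[c] = 0.45 < 1$, and checks by Monte Carlo that coupled chains driven by the same noise satisfy $\mathbb{E}[\mathrm{d}_{\mathsf{Z}}(Z_n, Z_n')] = \ell_F^n \, \mathrm{d}_{\mathsf{Z}}(z_0, z_0')$ in this example.

```python
import random

# Sketch of (A1) with a genuinely random Lipschitz factor (hypothetical
# example): on Z = [0, 1], take theta = (c, u) with c ~ Uniform[0, 0.9]
# and u ~ Uniform[0, 1], and set F(z, (c, u)) = c * z + (1 - c) * u.
# Then d(F(z, th), F(z', th)) = c * |z - z'|, so ell(theta) = c and
# ell_F = E[c] = 0.45 < 1.
random.seed(1)

def mean_coupled_gap(n, trials=20000):
    """Monte Carlo estimate of E[d(Z_n, Z'_n)] for chains started at 0 and 1."""
    total = 0.0
    for _ in range(trials):
        z, zb = 0.0, 1.0
        for _ in range(n):
            c, u = 0.9 * random.random(), random.random()
            z, zb = c * z + (1 - c) * u, c * zb + (1 - c) * u
        total += abs(z - zb)
    return total / trials

# In this example E[d(Z_n, Z'_n)] = ell_F**n * d(z0, z0') holds exactly,
# so the estimate should be close to 0.45**5 for n = 5:
print(mean_coupled_gap(5), 0.45 ** 5)
```

The independence of the factor `c` from the chain state is what makes the expected gap factor exactly $\ell_F^n$ here; in general (A1) only yields this as an upper bound.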
Let $\mathcal{P}(\mathsf{Z})$ denote the set of all probability measures on $\mathsf{Z}$. The $\mathrm{L}^1$-Wasserstein distance on $\mathcal{P}(\mathsf{Z})$ is given by
$$
\mathcal{W}(\mu_1, \mu_2) := \operatorname*{inf}_{\Pi \in \mathcal{C}(\mu_1, \mu_2)} \int_{\mathsf{Z} \times \mathsf{Z}} \mathrm{d}_{\mathsf{Z}}(z, \bar{z}) \, \Pi(\mathrm{d}z, \mathrm{d}\bar{z}),
$$
where $\mathcal{C}(\mu_1, \mu_2)$ is the set of all couplings of $\mu_1(\mathrm{d}z)$ and $\mu_2(\mathrm{d}\bar{z})$, meaning that $\Pi \in \mathcal{C}(\mu_1, \mu_2)$ is a probability measure on $\mathsf{Z} \times \mathsf{Z}$ with marginals $\mu_1(\mathrm{d}z)$ and $\mu_2(\mathrm{d}\bar{z})$. By the Kantorovich-Rubinstein theorem,
$$
\mathcal{W}(\mu_1, \mu_2) = \operatorname*{sup}_{\{f : \mathrm{Lip}(f) \leq 1\}} |\mu_1(f) - \mu_2(f)|,
$$
where the supremum is taken over all Lipschitz functions $f : \mathsf{Z} \to \mathbb{R}$ with Lipschitz constant $\mathrm{Lip}(f) \leq 1$, defined as the smallest $L \geq 0$ for which
$$
|f(z) - f(\bar{z})| \leq L \, \mathrm{d}_{\mathsf{Z}}(z, \bar{z}) \qquad \forall z, \bar{z} \in \mathsf{Z}.
$$
For $\mu \in \mathcal{P}(\mathsf{Z})$ and a measurable function $f : \mathsf{Z} \to \mathbb{R}$, the notation $\mu(f)$ represents the integral $\int_{\mathsf{Z}} f(z) \, \mu(\mathrm{d}z)$, whenever well defined. It is well known that $(\mathcal{P}(\mathsf{Z}), \mathcal{W})$ is a complete separable metric space (see, e.g., [33, Theorem 6.18]).
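In one dimension, the $\mathrm{L}^1$-Wasserstein distance between two empirical measures with equally many atoms admits a closed form: an optimal coupling matches sorted samples, so $\mathcal{W}$ reduces to the mean absolute difference of order statistics. The sketch below illustrates this standard fact (it is not specific to this paper).

```python
# One-dimensional special case (a standard fact, not specific to this
# paper): for two empirical measures on the real line with equally many
# atoms, an optimal L1 coupling matches sorted samples, so W1 reduces to
# the mean absolute difference of the order statistics.
def w1_empirical(xs, ys):
    """L1-Wasserstein distance between two equal-size empirical measures."""
    assert len(xs) == len(ys)
    return sum(abs(x - y) for x, y in zip(sorted(xs), sorted(ys))) / len(xs)

# Translating a measure by t moves it exactly W1-distance t:
mu = [0.1, 0.4, 0.7]
nu = [x + 0.2 for x in mu]
print(w1_empirical(mu, nu))
```

The translation example also matches the dual (Kantorovich-Rubinstein) form: the 1-Lipschitz witness $f(z) = z$ attains $|\mu_1(f) - \mu_2(f)| = t$.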
From assumption (A1), it follows that
$$
\mathcal{W}(\mathcal{P}(z, \mathrm{d}w), \mathcal{P}(\bar{z}, \mathrm{d}w)) \leq \mathbb{E}[\mathrm{d}_{\mathsf{Z}}(F(z, \vartheta_1), F(\bar{z}, \vartheta_1))] \leq \ell_F \, \mathrm{d}_{\mathsf{Z}}(z, \bar{z}) \qquad \forall z, \bar{z} \in \mathsf{Z}.
$$
In particular, for any Lipschitz function $f : \mathsf{Z} \to \mathbb{R}$,
$$
|\mathcal{P}(f)(z) - \mathcal{P}(f)(\bar{z})| \leq \mathrm{Lip}(f) \, \ell_F \, \mathrm{d}_{\mathsf{Z}}(z, \bar{z}) \qquad \forall z, \bar{z} \in \mathsf{Z},
$$
which implies that $z \mapsto \mathcal{P}(f)(z)$ is Lipschitz with Lipschitz constant at most $\mathrm{Lip}(f) \ell_F$. Consequently, for any $\mu_1, \mu_2 \in \mathcal{P}(\mathsf{Z})$,
$$
\mathcal{W}(\mu_1 \mathcal{P}, \mu_2 \mathcal{P}) \leq \ell_F \, \mathcal{W}(\mu_1, \mu_2).
$$
Since $\ell_F < 1$, the mapping $\mu \mapsto \mu\mathcal{P}$ is a contraction on $\mathcal{P}(\mathsf{Z})$. By the Banach fixed point theorem, there exists a unique $\pi \in \mathcal{P}(\mathsf{Z})$ satisfying $\pi\mathcal{P}(\mathrm{d}z) = \pi(\mathrm{d}z)$, meaning that $\pi(\mathrm{d}z)$ is the unique invariant probability measure of $\{Z_n\}_{n \geq 0}$. Moreover, for any $n \geq 1$ and $\mu \in \mathcal{P}(\mathsf{Z})$,
$$
\mathcal{W}(\mu\mathcal{P}^n, \pi) = \mathcal{W}(\mu\mathcal{P}^n, \pi\mathcal{P}^n) \leq \ell_F \, \mathcal{W}(\mu\mathcal{P}^{n-1}, \pi\mathcal{P}^{n-1}) \leq \cdots \leq \ell_F^n \, \mathcal{W}(\mu, \pi).
$$
Further, for $h \in \mathcal{H}$, define the function $\mathcal{L}_h : \mathsf{Z} \to [0, \infty)$ by
$$
\mathcal{L}_h(z) := \mathcal{L}(h(\mathrm{pr}_{\mathsf{X}}(z)), \mathrm{pr}_{\mathsf{Y}}(z)).
$$
We now impose the following assumption on the loss function $\mathcal{L}$ and the hypothesis class $\mathcal{H}$:

(A2): There exists a constant $\ell_{\mathcal{H}} > 0$ such that
$$
\mathcal{L}_h(z) \leq \ell_{\mathcal{H}} \quad \mathrm{and} \quad |\mathcal{L}_h(z) - \mathcal{L}_h(\bar{z})| \leq \ell_{\mathcal{H}} \, \mathrm{d}_{\mathsf{Z}}(z, \bar{z}) \qquad \forall z, \bar{z} \in \mathsf{Z}, \, h \in \mathcal{H}.
$$

Examples satisfying assumptions (A1) and (A2) are provided in Examples 2.2 to 2.4. For $n, m \in \mathbb{N}$ and $h \in \mathcal{H}$, let $\hat{\mathrm{er}}_n(h) := \frac{1}{n} \sum_{i=0}^{n-1} \mathcal{L}_h(Z_i)$, $\hat{\mathrm{er}}_{n,m}(h) := \frac{1}{m-n} \sum_{i=n}^{m-1} \mathcal{L}_h(Z_i)$ (when $m > n$), $\mathrm{er}_{\pi}(h) := \pi(\mathcal{L}_h)$ and $\mathrm{opt}_{\pi}(\mathcal{H}) := \operatorname*{inf}_{h \in \mathcal{H}} \mathrm{er}_{\pi}(h)$.
For $\varepsilon > 0$, the $\varepsilon$-approximate empirical risk minimization ($\varepsilon$-ERM) algorithm for $\mathcal{H}$ is defined as a mapping $\mathcal{A}^{\varepsilon}\colon \bigcup_{n=1}^{\infty} \mathsf{Z}^n \to \mathcal{H}$ satisfying
$$
\frac{1}{n}\sum_{i=0}^{n-1} \mathcal{L}_{\mathcal{A}^{\varepsilon}(z_0,\ldots,z_{n-1})}(z_i) \le \inf_{h\in\mathcal{H}} \frac{1}{n}\sum_{i=0}^{n-1}\mathcal{L}_h(z_i) + \varepsilon.
$$
For further details on $\varepsilon$-ERM, we refer the reader to [2] and the references therein. Finally, let $\{\sigma_n\}_{n\ge 0}$ be an i.i.d. sequence of symmetric Bernoulli random variables (taking values in $\{-1,1\}$) defined on a probability space $(\Omega^{\sigma}, \mathcal{F}^{\sigma}, \mathbb{P}^{\sigma})$, independent of $\{Z_n\}_{n\ge 0}$. The $n$-empirical Rademacher complexity of the function class $\{\mathcal{L}_h : h\in\mathcal{H}\}$ with respect to $z_0,\ldots,z_{n-1}\in\mathsf{Z}$ is given by
$$
\hat{\mathcal{R}}_{n,(z_0,\ldots,z_{n-1})}(\mathcal{H}) := \mathbb{E}^{\sigma}\left[\sup_{h\in\mathcal{H}} \frac{1}{n}\sum_{i=0}^{n-1} \sigma_i \mathcal{L}_h(z_i)\right].
$$
Similarly, the $(n,\mu)$-Rademacher complexity of $\{\mathcal{L}_h : h\in\mathcal{H}\}$ with respect to $\{Z_n\}_{n\ge 0}$ with initial distribution $\mu(\mathrm{d}z)$ is defined as $\mathcal{R}_{n,\mu}(\mathcal{H}) := \mathbb{E}^{\mu}[\hat{\mathcal{R}}_{n,(Z_0,\ldots,Z_{n-1})}(\mathcal{H})]$.
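For a finite hypothesis class, the empirical Rademacher complexity just defined can be approximated by Monte Carlo over the signs $\sigma_i$. The sketch below is illustrative (the hypothetical `loss_matrix` stores the values $\mathcal{L}_{h_k}(z_i)$, generated randomly here); it also shows that a larger class correlates more strongly with random signs.

```python
import numpy as np

def empirical_rademacher(loss_matrix, n_draws=2000, seed=0):
    """Monte Carlo estimate of the n-empirical Rademacher complexity.

    loss_matrix[k, i] holds L_{h_k}(z_i) for a finite hypothesis class
    {h_0, ..., h_{K-1}} and sample points z_0, ..., z_{n-1}.
    """
    rng = np.random.default_rng(seed)
    n = loss_matrix.shape[1]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))   # Rademacher signs
    # for each sign vector: sup over h of (1/n) * sum_i sigma_i * L_h(z_i)
    sups = (sigma @ loss_matrix.T / n).max(axis=1)
    return float(sups.mean())

rng = np.random.default_rng(2)
n = 400
r_small = empirical_rademacher(rng.uniform(0, 1, size=(3, n)))    # 3 hypotheses
r_big = empirical_rademacher(rng.uniform(0, 1, size=(300, n)))    # 300 hypotheses
print(r_small, r_big)  # the richer class has the larger complexity
```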
The Rademacher complexity measures, on average, how well the function class $\{\mathcal{L}_h : h\in\mathcal{H}\}$ correlates with random noise $\{\sigma_n\}_{n\ge 0}$ on the given dataset. In general, richer or more complex function classes tend to have higher Rademacher complexity, as they exhibit stronger correlations with random noise. For further details on Rademacher complexity, see [25] and [29]. We now state the main result of this article.

Theorem 2.1. Assume (A1) and (A2). For any $\mu\in\mathcal{P}(\mathsf{Z})$ and $\varepsilon\in(0,1)$, we have
$$
\mathbb{P}^{\mu}\left(|\mathrm{er}_\pi\big(\mathcal{A}^{\varepsilon}(Z_0,\ldots,Z_{n-1})\big) - \mathrm{opt}_\pi(\mathcal{H})| < 4\mathcal{R}_{n,\pi}(\mathcal{H}) + 2\ell_{\mathcal{H}}\ell_F^n\, \mathcal{W}(\mu,\pi) + 4\varepsilon\right) \ge 1 - 2\mathrm{e}^{-2\varepsilon^2 n/(\ell_{\mathcal{H}}/(1-\ell_F))^2}.
$$
Furthermore, for any $\varepsilon\in(0,1)$, it holds that
$$
\mathbb{P}^{\pi}\left(|\mathrm{er}_\pi\big(\mathcal{A}^{\varepsilon}(Z_0,\ldots,Z_{n-1})\big) - \mathrm{opt}_\pi(\mathcal{H})| < 4\hat{\mathcal{R}}_{n,(Z_0,\ldots,Z_{n-1})}(\mathcal{H}) + 6\varepsilon\right) \ge 1 - 2\mathrm{e}^{-2\varepsilon^2 n/(\ell_{\mathcal{H}}/(1-\ell_F))^2}.
$$
The proof of Theorem 2.1 follows a standard approach (see, e.g., [25]).
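To make the object bounded by Theorem 2.1 concrete, here is a minimal $\varepsilon$-ERM sketch for a finite hypothesis class on a toy dataset (class, loss, and data are hypothetical, not from the text). Note that the returned hypothesis is only required to come within $\varepsilon$ of the best empirical risk, so it need not be the exact minimizer.

```python
import numpy as np

def eps_erm(zs, hypotheses, loss, eps):
    # epsilon-approximate ERM: return any h whose empirical risk is within
    # eps of the infimum over the (finite) class.
    risks = [np.mean([loss(h, z) for z in zs]) for h in hypotheses]
    threshold = min(risks) + eps
    return next(h for h, r in zip(hypotheses, risks) if r <= threshold)

rng = np.random.default_rng(1)
xs = rng.uniform(-1, 1, size=200)
data = [(x, 0.3 * x) for x in xs]                      # z = (x, y) with y = 0.3 x
hyps = [lambda x, c=c: c * x for c in np.linspace(-1, 1, 21)]
loss = lambda h, z: min((h(z[0]) - z[1]) ** 2, 1.0)    # bounded squared loss
h_hat = eps_erm(data, hyps, loss, eps=0.01)
print(h_hat(1.0))  # slope close to the true value 0.3
```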
Specifically, leveraging the contractivity (in the first variable) of the governing function of $\{Z_n\}_{n\ge 0}$, together with the Lipschitz continuity of the loss function and the hypothesis class, and applying a Hoeffding-type inequality, we first establish a uniform convergence result for the corresponding sample error in terms of the Rademacher and empirical Rademacher complexities of $\mathcal{H}$ (see Lemmas 4.1, 4.2 and 4.4). This result then allows us to conclude the learnability of the approximate empirical risk minimization algorithm and derive its generalization bounds. To the best of our knowledge, the only closely related result appears in [3], where the authors obtain data-dependent learning rates (expressed in terms of Rademacher complexities) for a class of irreducible and aperiodic ergodic Markov chains that admit an atom. However, unlike [3], our work does not assume irreducibility, aperiodicity, or atomic structure of the underlying Markov chain (modeled here as an iterated random function). As in the classical i.i.d. setting (see [25]), the learning rate obtained in Theorem 2.1 is exponential. Notably, a similar rate (with different constants) has also been established in [3] and in other works addressing the non-i.i.d. setting, including [11], [18], [19], and [24]. Next, we recall that if $\mathsf{Y}$ is finite (as in classification or ranking problems), then from [25, Theorem 3.7] (with $r := L_{\mathcal{H}}\sqrt{n}$), it follows that
$$
\hat{\mathcal{R}}_{n,(z_0,\ldots,z_{n-1})}(\mathcal{H}) \le L_{\mathcal{H}} \sqrt{\frac{2\log \mathrm{r}_{\{\mathcal{L}_h : h\in\mathcal{H}\}}(n)}{n}}.
$$
Here, $L_{\mathcal{H}} := \sup_{z\in\mathsf{Z},\, h\in\mathcal{H}} \mathcal{L}_h(z)$ and $\mathrm{r}_{\{\mathcal{L}_h : h\in\mathcal{H}\}}\colon \mathbb{N}\to\mathbb{N}$ denotes the growth function of the class $\{\mathcal{L}_h : h\in\mathcal{H}\}$ (see, e.g., [2, Section 3.2]). By definition,
$$
\mathrm{r}_{\{\mathcal{L}_h : h\in\mathcal{H}\}}(n) \le \min\{\mathrm{card}(\mathcal{H}),\ \mathrm{card}(\{\mathcal{L}_h(z) : z\in\mathsf{Z},\ h\in\mathcal{H}\})^n\}.
$$
In particular, for binary classification (i.e., when $\mathrm{card}(\mathsf{Y}) = 2$), we obtain
$$
\hat{\mathcal{R}}_{n,(z_0,\ldots,z_{n-1})}(\mathcal{H}) \le L_{\mathcal{H}} \sqrt{\frac{2\,\mathrm{VC}(\{\mathcal{L}_h : h\in\mathcal{H}\})\, \log\big(\mathrm{e}n/\mathrm{VC}(\{\mathcal{L}_h : h\in\mathcal{H}\})\big)}{n}},
$$
where $\mathrm{VC}(\{\mathcal{L}_h : h\in\mathcal{H}\})$ denotes the Vapnik-Chervonenkis dimension of $\{\mathcal{L}_h : h\in\mathcal{H}\}$ (see [2, Corollary 3.8]).
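Plugging numbers into the VC-based bound above shows the familiar $\sqrt{\log n / n}$ decay; the values of $L_{\mathcal{H}}$ and of the VC dimension below are arbitrary placeholders, chosen only to illustrate the arithmetic.

```python
import math

def vc_rademacher_bound(L_H, vc, n):
    # L_H * sqrt(2 * VC * log(e * n / VC) / n), for n >= VC
    return L_H * math.sqrt(2 * vc * math.log(math.e * n / vc) / n)

bounds = [vc_rademacher_bound(L_H=1.0, vc=5, n=n) for n in (100, 1000, 10000)]
print(bounds)  # decreasing in n
```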
On the other hand, if $\{\mathcal{L}_h : h\in\mathcal{H}\}$ consists of bounded functions (so that $\mathsf{Y}$ need not be finite), then
$$
\hat{\mathcal{R}}_{n,(z_0,\ldots,z_{n-1})}(\mathcal{H}) \le \inf_{\alpha\ge 0}\left\{4\alpha + \frac{c_1}{\sqrt{n}} \int_{\alpha}^{1} \sqrt{\mathrm{fat}_{\delta}(\{\mathcal{L}_h : h\in\mathcal{H}\})\, \log(c_2/\delta)}\, \mathrm{d}\delta\right\},
$$
where the constants $c_1$ and $c_2$ depend only on the boundary points of the interval containing the images of $\{\mathcal{L}_h : h\in\mathcal{H}\}$. The term $\mathrm{fat}_{\delta}(\{\mathcal{L}_h : h\in\mathcal{H}\})$ denotes the $\delta$-fat-shattering dimension of $\{\mathcal{L}_h : h\in\mathcal{H}\}$ (see [30, Lecture 12]). Let us now present several examples of iterated random functions that satisfy the conditions of Theorem 2.1. We begin with an example designed to generate image data.

Example 2.2. Let $d,m\in\mathbb{N}$ and $R > 0$, and let $\mathsf{X} = \mathbb{R}^d$ and $\mathsf{Y} = \mathbb{R}^m$. We equip these spaces with the standard Euclidean norms, denoted by $\|\cdot\|_{\mathsf{X}}$ and $\|\cdot\|_{\mathsf{Y}}$, respectively. Further, let $\eta(\mathrm{d}x)$ be a probability measure on $\mathsf{X}$ satisfying $\eta(\bar{B}_{\mathsf{X}}(0,R)^c) = 0$, where $\bar{B}_{\mathsf{X}}(x_0,\rho)$ denotes the closed ball of radius $\rho > 0$ centered at $x_0\in\mathsf{X}$. The measure $\eta(\mathrm{d}x)$ represents an image contained within the closed ball of radius $R$ around the origin.
According to [8], for any $k\ge 2$, there exist: (i) affine transformations $(a_1,b_1),\ldots,(a_k,b_k)\in\mathbb{R}^{d\times d}\times\mathbb{R}^{d\times 1}$, where each $a_i$ is a contraction, i.e., $\|a_i\| < 1$ (here $\|\cdot\|$ denotes the spectral norm), and (ii) a probability measure $\nu(\mathrm{d}i)$ on $\{1,\ldots,k\}$, such that for an i.i.d. sequence $\{\vartheta_n\}_{n\ge 1} = \{(A_n,B_n)\}_{n\ge 1}$ on $\Theta = \{(a_1,b_1),\ldots,(a_k,b_k)\}$ with distribution $\nu(\mathrm{d}i)$, a random variable $X_0$ on $\bar{B}_{\mathsf{X}}(0,R+r)$ independent of $\{\vartheta_n\}_{n\ge 1}$, where $r := \max_{i=1,\ldots,k}\|b_i\|_{\mathsf{X}}$, and any $f\colon \bar{B}_{\mathsf{X}}(0,R+r)\times\Theta \to \bar{B}_{\mathsf{X}}(0,R+r)$ satisfying $f(x,(a_i,b_i)) = a_i x + b_i$ for $x\in\bar{B}_{\mathsf{X}}(0,R)$ and $\|f(x,(a_i,b_i)) - f(\bar{x},(a_i,b_i))\|_{\mathsf{X}} \le \|a_i\|\, \|x - \bar{x}\|_{\mathsf{X}}$, the sequence
$$
X_n := f(X_{n-1}, \vartheta_n), \qquad n\ge 1,
$$
defines a Markov chain on $\bar{B}_{\mathsf{X}}(0,R+r)$ with invariant probability measure $\eta(\mathrm{d}x)$.
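This is the classical iterated-function-system construction of fractal images. As a concrete instance (not taken from the text), the "chaos game" below runs the chain $X_n = f(X_{n-1},\vartheta_n)$ for three affine contractions with $\|a_i\| = 0.5$ whose invariant measure is supported on the Sierpinski triangle; the trajectory's empirical measure approximates $\eta(\mathrm{d}x)$.

```python
import numpy as np

rng = np.random.default_rng(3)
# Three maps x -> a_i x + b_i with ||a_i|| = 0.5 < 1 (spectral norm),
# indices drawn i.i.d. and uniformly: the Sierpinski-triangle IFS.
a = 0.5 * np.eye(2)
b = 0.5 * np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

x = np.zeros(2)
points = []
for n in range(20_000):
    i = rng.integers(3)        # vartheta_n ~ nu (uniform here)
    x = a @ x + b[i]           # X_n = f(X_{n-1}, vartheta_n)
    if n >= 100:               # discard a short burn-in
        points.append(x.copy())
points = np.array(points)
# The empirical measure of the trajectory approximates the invariant
# measure eta(dx), here supported on the Sierpinski triangle.
print(points.min(axis=0), points.max(axis=0))
```

The fixed points of the three maps are the triangle's vertices $(0,0)$, $(1,0)$, $(1/2,\sqrt{3}/2)$, and every iterate stays inside their convex hull.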
Denote the transition function of $\{X_n\}_{n\ge 0}$ by $\mathcal{P}_X(x,\mathrm{d}\bar{x})$. Then, for any $n\ge 1$ and $\mu\in\mathcal{P}(\bar{B}_{\mathsf{X}}(0,R+r))$, it holds that
$$
\mathcal{W}(\mu\mathcal{P}_X^n, \eta) = \mathcal{W}(\mu\mathcal{P}_X^n, \eta\mathcal{P}_X^n) \le \mathbb{E}[\|A_1\|]\, \mathcal{W}(\mu\mathcal{P}_X^{n-1}, \eta\mathcal{P}_X^{n-1}) \le \cdots \le \mathbb{E}[\|A_1\|]^n\, \mathcal{W}(\mu,\eta).
$$
Since each $a_i$ is a contraction, we have $\mathbb{E}[\|A_1\|] < 1$, implying that $\eta(\mathrm{d}x)$ is the unique invariant probability measure of $\{X_n\}_{n\ge 0}$, and $\mu\mathcal{P}_X^n(\mathrm{d}x)$ approximates the image given by $\eta(\mathrm{d}x)$ in the Wasserstein sense. Next, let $\mathsf{Z}$ be the closed ball of radius $R+r$ around the origin in $\mathsf{X}\times\mathsf{Y}$, equipped with the metric
$$
\mathrm{d}_{\mathsf{Z}}(z,\bar{z}) := \frac{\|x - \bar{x}\|_{\mathsf{X}} + \|y - \bar{y}\|_{\mathsf{Y}}}{4(R+r)}, \qquad z = (x,y),\ \bar{z} = (\bar{x},\bar{y})\in\mathsf{Z}.
$$
Observe that $\mathrm{pr}_{\mathsf{X}}(\mathsf{Z}) = \bar{B}_{\mathsf{X}}(0,R+r)$ and $\mathrm{d}_{\mathsf{Z}}$ is bounded by 1.
Further, let $h_0\colon \mathrm{pr}_{\mathsf{X}}(\mathsf{Z}) \to \mathrm{pr}_{\mathsf{Y}}(\mathsf{Z})$ be Lipschitz and satisfy $\mathbb{E}[\|A_1\|](1 + \mathrm{Lip}(h_0)) < 4(R+r)$. Finally, define $F\colon \mathsf{Z}\times\Theta\to\mathsf{Z}$ by
$$
F(z,\theta) := (f(x,\theta),\ h_0(f(x,\theta))), \qquad z = (x,y)\in\mathsf{Z}.
$$
Since
$$
\ell((a_i,b_i)) \le \frac{\|a_i\|(1 + \mathrm{Lip}(h_0))}{4(R+r)},
$$
the above condition implies $\ell_F < 1$, ensuring that assumption (A1) holds. Consequently, the process $Z_n := F(Z_{n-1},\vartheta_n) = (X_n, h_0(X_n))$, $n\ge 1$, with $Z_0 := (X_0, h_0(X_0))$, defines a Markov chain on $\mathsf{Z}$ with a unique invariant probability measure $\pi(\mathrm{d}z)$. Assumption (A2) holds, for instance, if $\mathcal{L}$ is Lipschitz, the hypothesis class $\mathcal{H}$ consists of Lipschitz functions satisfying $\sup_{h\in\mathcal{H}} \mathrm{Lip}(h) < \infty$, and the function $(z,h)\mapsto \mathcal{L}_h(z)$ is bounded. □

The previous example can be placed in a more general framework.

Example 2.3. Let $\mathsf{X}$ and $\mathsf{Y}$ be separable and complete metric spaces equipped with metrics $\mathrm{d}_{\mathsf{X}}$ and $\mathrm{d}_{\mathsf{Y}}$, respectively, let $\mathsf{Z}\subset\mathsf{X}\times\mathsf{Y}$ be bounded, and set $\kappa := \sup_{z=(x,y),\,\bar{z}=(\bar{x},\bar{y})\in\mathsf{Z}} (\mathrm{d}_{\mathsf{X}}(x,\bar{x}) + \mathrm{d}_{\mathsf{Y}}(y,\bar{y}))$.
Consider the metric
$$
\mathrm{d}_{\mathsf{Z}}(z,\bar{z}) := \frac{\mathrm{d}_{\mathsf{X}}(x,\bar{x}) + \mathrm{d}_{\mathsf{Y}}(y,\bar{y})}{\kappa}, \qquad z = (x,y),\ \bar{z} = (\bar{x},\bar{y})\in\mathsf{Z}.
$$
Next, let $f\colon \mathrm{pr}_{\mathsf{X}}(\mathsf{Z})\times\Theta\to\mathrm{pr}_{\mathsf{X}}(\mathsf{Z})$ be a measurable function satisfying
$$
\mathrm{d}_{\mathsf{X}}(f(x,\theta), f(\bar{x},\theta)) \le \ell(\theta)\, \mathrm{d}_{\mathsf{X}}(x,\bar{x})
$$
for some measurable $\ell\colon \Theta\to[0,\infty)$ with $\ell_f := \mathbb{E}[\ell(\vartheta_1)] < \infty$. Let $h_0\colon \mathrm{pr}_{\mathsf{X}}(\mathsf{Z})\to\mathrm{pr}_{\mathsf{Y}}(\mathsf{Z})$ be Lipschitz, and define
$$
F(z,\theta) := (f(x,\theta),\ h_0(f(x,\theta))), \qquad z = (x,y)\in\mathsf{Z}.
$$
Assume that $\ell_f(1 + \mathrm{Lip}(h_0)) < \kappa$. It follows directly that
$$
\ell_F \le \frac{\ell_f(1 + \mathrm{Lip}(h_0))}{\kappa},
$$
which ensures that assumption (A1) is satisfied. As in the previous example, assumption (A2) holds under mild assumptions, such as when $\mathcal{L}$ is Lipschitz, $\mathcal{H}$ consists of Lipschitz functions satisfying $\sup_{h\in\mathcal{H}} \mathrm{Lip}(h) < \infty$, and the function $(z,h)\mapsto\mathcal{L}_h(z)$ is bounded. □ In the following example, we consider an iterated random function that is neither irreducible nor aperiodic. Example 2.4.
Let $\{Z_n\}_{n\ge 0}$ be an iterated random function from Example 2.3 in which the function $f(x,\theta)$ does not depend on $\theta$, i.e., $f(x,\theta) = f(x)$ for some $f\colon \mathrm{pr}_{\mathsf{X}}(\mathsf{Z})\to\mathrm{pr}_{\mathsf{X}}(\mathsf{Z})$. In this case, it is straightforward to verify that the unique invariant probability measure is given by $\pi(\mathrm{d}z) = \delta_{z_0}(\mathrm{d}z)$, where $z_0 = (x_0,y_0)\in\mathsf{Z}$ is the unique solution to $z_0 = (f(x_0), h_0(f(x_0)))$. Moreover, one can easily construct concrete examples of such Markov chains (e.g., in $\mathbb{R}^2$) that satisfy the given assumptions but are neither irreducible nor aperiodic. Furthermore, their $n$-step transition functions do not necessarily converge to $\pi(\mathrm{d}z)$ in the total variation distance. □ For further examples of iterated random functions satisfying the conditions of Theorem 2.1, we refer the reader to [8].

# 3. LITERATURE REVIEW

Our work contributes to the understanding of the statistical properties of supervised machine learning problems. Much of the existing literature focuses on PAC learnability under the assumption that the training dataset is an i.i.d. sample, as discussed in the classical monographs [2], [25] and [29]. However, many practical supervised learning problems, such as speech recognition, object tracking, and market prediction, exhibit temporal dependence and strong correlations within the data-generating process, making the i.i.d. assumption questionable. The first study addressing PAC learnability in such settings appeared in [32], while consistency was examined in [21], [23], and [34], within the framework of stationary mixing time-series processes. Further results in this context can be found in [4] and [12].
In [14] and [31], the authors relaxed the stationarity assumption, requiring only that the data-generating process satisfies a specific law of large numbers in the former work and certain mixing properties in the latter. These approaches encompass (non-)stationary mixing time-series processes as well as irreducible and aperiodic ergodic Markov chains. The PAC learnability of this model was further explored in a series of works [10], [35], [36], [37] and [38]. A broader generalization, considering PAC learnability for not necessarily irreducible and aperiodic ergodic Markov chains, was proposed in [28]. Related results have also been studied in the context of general concentration inequalities. Classical references include [5], [9] and [20]. By employing coupling techniques and imposing constraints on the coupling time, analogous concentration inequalities for general coordinatewise Lipschitz functions evaluated along the sample path of a stationary ergodic Markov chain were established in [6] and [26]; see also the references therein for related work. By forgoing explicit assumptions of stationarity, irreducibility, and aperiodicity, and instead utilizing martingale techniques, similar results have been derived in [15] and [16]. A key property in these studies is the uniform contractivity of the transition function with respect to the total variation distance, a condition known as the Markov-Dobrushin condition (see, e.g., [13, Theorem 4.1] or [17, Chapter 3]). This condition encapsulates stationarity, irreducibility, and aperiodicity of the underlying Markov chain. Across these studies, the obtained learning rates are either data-distribution-independent (in PAC or concentration inequality results) or remain unknown (in consistency results). Data-distribution-dependent (or consistency) learning rates for i.i.d. samples are provided, for example, in [25] and [29]. The first results on consistency learning rates in non-i.i.d.
settings appeared in [24], where the authors considered a class of stationary mixing time-series processes, expressing data-distribution dependence in terms of the Rademacher complexities of the underlying hypothesis class. This result was later extended to empirical processes of stationary mixing time-series in [11] and to non-stationary mixing time-series processes in [18] and [19]. The only related result in the context of Markov chains is found in [3], where the authors establish bounds on the Rademacher complexities of a class of Vapnik-Chervonenkis type functions and derive consistency learning rates (in terms of Rademacher complexities) for irreducible and aperiodic ergodic Markov chains admitting an atom. In contrast, in this work, we do not assume irreducibility, aperiodicity, or an atomic structure in the underlying Markov chain (iterated random function). The ergodic properties of iterated random functions, specifically stationarity and convergence to the corresponding invariant probability measure, were first studied in [8]. Based on the contractivity and Lipschitz continuity of the governing function, as assumed in this paper (see conditions (A1) and (A2)), [7] derived various concentration inequalities. These results were further applied to bound the Wasserstein distance between the empirical and invariant distributions of the iterated random function. A key distinction between their work and ours is that [7] assumes Lipschitz continuity in the second variable of the governing function, a requirement that excludes important examples, such as the one in Example 2.2. Finally, by replacing the governing function $F(z,\theta)$ with a sequence of functions $\{F_n(z,\theta)\}_{n\ge 0}$ satisfying the same contractivity and Lipschitz conditions (in both variables), the results of [7] were extended in [1] to a class of time-inhomogeneous iterated random functions.

# 4.
PROOF OF THEOREM 2.1

In this section, we establish the proof of Theorem 2.1. We begin by deriving a uniform convergence result for the associated sample error.

Lemma 4.1. Assume (A1) and (A2). For any $\mu\in\mathcal{P}(\mathsf{Z})$ and $\varepsilon\in(0,1)$, the following inequality holds:
$$
\mathbb{P}^{\mu}\left(\sup_{h\in\mathcal{H}} |\hat{\mathrm{er}}_{n,2n}(h) - \mathrm{er}_\pi(h)| \ge \mathbb{E}^{\mu}\big[\sup_{h\in\mathcal{H}} |\hat{\mathrm{er}}_{n,2n}(h) - \mathrm{er}_\pi(h)|\big] + \varepsilon\right) \le \mathrm{e}^{-2\varepsilon^2 n/(\ell_{\mathcal{H}}/(1-\ell_F))^2}.
$$
Proof. Fix $n\in\mathbb{N}$, and define the function $\varphi\colon \mathsf{Z}^n\to\mathbb{R}$ as
$$
\varphi(z_1,\ldots,z_n) := \sup_{h\in\mathcal{H}} \left|\frac{1}{n}\sum_{i=1}^{n} \mathcal{L}_h(z_i) - \mathrm{er}_\pi(h)\right|.
$$
Observe that
$$
\varphi(z_1,\ldots,z_n) \le \sup_{h\in\mathcal{H}} \frac{1}{n}\sum_{i=1}^{n}\left(\int_{\mathsf{Z}} |\mathcal{L}_h(z_i) - \mathcal{L}_h(z)|\, \pi(\mathrm{d}z)\right) \le \ell_{\mathcal{H}}.
$$
Additionally, for any $(z_1,\ldots,z_n), (\bar{z}_1,\ldots,\bar{z}_n)\in\mathsf{Z}^n$,
$$
|\varphi(z_1,\ldots,z_n) - \varphi(\bar{z}_1,\ldots,\bar{z}_n)| \le \sup_{h\in\mathcal{H}} \left| \left|\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_h(z_i) - \mathrm{er}_\pi(h)\right| - \left|\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_h(\bar{z}_i) - \mathrm{er}_\pi(h)\right| \right| \le \sup_{h\in\mathcal{H}} \frac{1}{n}\sum_{i=1}^{n} |\mathcal{L}_h(z_i) - \mathcal{L}_h(\bar{z}_i)|. \tag{4.1}
$$
Moreover, we have $\varphi(Z_n,\ldots,Z_{2n-1}) = \sup_{h\in\mathcal{H}} |\hat{\mathrm{er}}_{n,2n}(h) - \mathrm{er}_\pi(h)|$. Due to boundedness, the conditional expectation $\mathbb{E}^{\mu}[\varphi(Z_n,\ldots,Z_{2n-1}) \mid Z_n,\ldots,Z_i]$ is well defined for all $\mu\in\mathcal{P}(\mathsf{Z})$ and $i = n,\ldots,2n-1$. Define the functions $f_i\colon \mathsf{Z}^{i-n+1}\to\mathbb{R}$ by
$$
f_i(Z_n,\ldots,Z_i) := \mathbb{E}^{\mu}[\varphi(Z_n,\ldots,Z_{2n-1}) \mid Z_n,\ldots,Z_i].
$$
We now show that for all $i\in\{n,\ldots,2n-1\}$,
$$
\begin{aligned}
&|f_i(z_1,\ldots,z_{i-n+1}) - f_i(\bar{z}_1,\ldots,\bar{z}_{i-n+1})| \\
&\quad\le \frac{1}{n}\int_{\Theta}\cdots\int_{\Theta} \sup_{h\in\mathcal{H}}\Bigg(\sum_{j=1}^{i-n+1} |\mathcal{L}_h(z_j) - \mathcal{L}_h(\bar{z}_j)| \\
&\qquad\quad + \sum_{j=1}^{2n-i-1} |\mathcal{L}_h(F^j(z_{i-n+1}, y_j)) - \mathcal{L}_h(F^j(\bar{z}_{i-n+1}, y_j))|\Bigg)\, \mathbb{P}_{\vartheta_{2n-i-1}}(\mathrm{d}y_{2n-i-1})\cdots\mathbb{P}_{\vartheta_1}(\mathrm{d}y_1),
\end{aligned} \tag{4.2}
$$
where
$$
F^1(z, y_1) = F(z, y_1) \qquad \text{and} \qquad F^j(z, y_j) = F(F^{j-1}(z, y_{j-1}), y_j).
$$
We prove this claim by induction. For $i = 2n-1$, from eq. (4.1), we have that
$$
|f_{2n-1}(z_1,\ldots,z_n) - f_{2n-1}(\bar{z}_1,\ldots,\bar{z}_n)| = |\varphi(z_1,\ldots,z_n) - \varphi(\bar{z}_1,\ldots,\bar{z}_n)| \le \sup_{h\in\mathcal{H}} \frac{1}{n}\sum_{i=1}^{n} |\mathcal{L}_h(z_i) - \mathcal{L}_h(\bar{z}_i)|.
$$
Assuming eq. (4.2) holds for some $i\in\{n+1,\ldots,2n-1\}$, we establish it for $i-1$.
Using properties of conditional expectation, we obtain
$$
\begin{aligned}
&|f_{i-1}(z_1,\ldots,z_{i-n}) - f_{i-1}(\bar{z}_1,\ldots,\bar{z}_{i-n})| \\
&\quad= |\mathbb{E}^{\mu}[f_i(Z_n,\ldots,Z_i) \mid Z_n = z_1,\ldots,Z_{i-1} = z_{i-n}] - \mathbb{E}^{\mu}[f_i(Z_n,\ldots,Z_i) \mid Z_n = \bar{z}_1,\ldots,Z_{i-1} = \bar{z}_{i-n}]| \\
&\quad\le \int_{\Theta} |f_i(z_1,\ldots,z_{i-n}, F(z_{i-n}, y)) - f_i(\bar{z}_1,\ldots,\bar{z}_{i-n}, F(\bar{z}_{i-n}, y))|\, \mathbb{P}_{\vartheta_1}(\mathrm{d}y) \\
&\quad\le \frac{1}{n}\int_{\Theta}\cdots\int_{\Theta} \sup_{h\in\mathcal{H}}\Bigg(\sum_{j=1}^{i-n} |\mathcal{L}_h(z_j) - \mathcal{L}_h(\bar{z}_j)| \\
&\qquad\quad + \sum_{j=1}^{2n-i} |\mathcal{L}_h(F^j(z_{i-n}, y_j)) - \mathcal{L}_h(F^j(\bar{z}_{i-n}, y_j))|\Bigg)\, \mathbb{P}_{\vartheta_{2n-i}}(\mathrm{d}y_{2n-i})\cdots\mathbb{P}_{\vartheta_1}(\mathrm{d}y_1),
\end{aligned}
$$
which completes the proof of eq. (4.2).
Next, define, for $i = n+1,\ldots,2n-1$,
$$
\begin{aligned}
V_i &:= f_i(Z_n,\ldots,Z_i) - f_{i-1}(Z_n,\ldots,Z_{i-1}), \\
L_i &:= \inf_{z_i\in\mathsf{Z}} f_i(Z_n,\ldots,Z_{i-1}, z_i) - f_{i-1}(Z_n,\ldots,Z_{i-1}), \\
R_i &:= \sup_{z_i\in\mathsf{Z}} f_i(Z_n,\ldots,Z_{i-1}, z_i) - f_{i-1}(Z_n,\ldots,Z_{i-1}),
\end{aligned}
$$
with
$$
\begin{aligned}
V_n &:= f_n(Z_n) - \mathbb{E}^{\mu}[\varphi(Z_n,\ldots,Z_{2n-1})], \\
L_n &:= \inf_{z_1\in\mathsf{Z}} f_n(z_1) - \mathbb{E}^{\mu}[\varphi(Z_n,\ldots,Z_{2n-1})], \\
R_n &:= \sup_{z_1\in\mathsf{Z}} f_n(z_1) - \mathbb{E}^{\mu}[\varphi(Z_n,\ldots,Z_{2n-1})].
\end{aligned}
$$
It follows that $L_i \le V_i \le R_i$, $\mathbb{E}^{\mu}[V_i \mid Z_n,\ldots,Z_{i-1}] = 0$, $\mathbb{E}^{\mu}[V_n] = 0$,
$$
\sum_{i=n}^{2n-1} V_i = \varphi(Z_n,\ldots,Z_{2n-1}) - \mathbb{E}^{\mu}[\varphi(Z_n,\ldots,Z_{2n-1})]
$$
and
$$
\begin{aligned}
R_i - L_i &= \sup_{z_i\in\mathsf{Z}} f_i(Z_n,\ldots,Z_{i-1}, z_i) - \inf_{\bar{z}_i\in\mathsf{Z}} f_i(Z_n,\ldots,Z_{i-1}, \bar{z}_i) \\
&= \sup_{z_i,\bar{z}_i\in\mathsf{Z}} \big(f_i(Z_n,\ldots,Z_{i-1}, z_i) - f_i(Z_n,\ldots,Z_{i-1}, \bar{z}_i)\big) \\
&\le \frac{1}{n}\sup_{z_i,\bar{z}_i\in\mathsf{Z}} \int_{\Theta}\cdots\int_{\Theta} \sup_{h\in\mathcal{H}}\Bigg(|\mathcal{L}_h(z_i) - \mathcal{L}_h(\bar{z}_i)| \\
&\qquad\quad + \sum_{j=1}^{2n-i-1} |\mathcal{L}_h(F^j(z_i, y_j)) - \mathcal{L}_h(F^j(\bar{z}_i, y_j))|\Bigg)\, \mathbb{P}_{\vartheta_{2n-i-1}}(\mathrm{d}y_{2n-i-1})\cdots\mathbb{P}_{\vartheta_1}(\mathrm{d}y_1) \\
&\le \frac{\ell_{\mathcal{H}}(1 + \ell_F + \cdots + \ell_F^{2n-i-1})}{n} \sup_{z,\bar{z}\in\mathsf{Z}} \mathrm{d}_{\mathsf{Z}}(z,\bar{z}).
\end{aligned}
$$
In particular, since $\mathrm{d}_{\mathsf{Z}}$ is bounded by 1, $L_i \le V_i \le L_i + (\ell_{\mathcal{H}}/(1-\ell_F))/n$. For all $s > 0$, we now have
$$
\begin{aligned}
\mathbb{P}^{\mu}\big(\varphi(Z_n,\ldots,Z_{2n-1}) - \mathbb{E}^{\mu}[\varphi(Z_n,\ldots,Z_{2n-1})] \ge \varepsilon\big)
&= \mathbb{P}^{\mu}\left(\sum_{i=n}^{2n-1} V_i \ge \varepsilon\right) \\
&= \mathbb{P}^{\mu}\big(\mathrm{e}^{s\sum_{i=n}^{2n-1} V_i} \ge \mathrm{e}^{s\varepsilon}\big) \\
&\le \mathrm{e}^{-s\varepsilon}\, \mathbb{E}^{\mu}\big[\mathrm{e}^{s\sum_{i=n}^{2n-1} V_i}\big] \\
&= \mathrm{e}^{-s\varepsilon}\, \mathbb{E}^{\mu}\left[\mathrm{e}^{s\sum_{i=n}^{2n-2} V_i}\, \mathbb{E}^{\mu}\big[\mathrm{e}^{s V_{2n-1}} \mid Z_n,\ldots,Z_{2n-2}\big]\right].
\end{aligned}
$$
Using a standard concentration argument (see [25, Lemma D.1]), we obtain
$$
\mathbb{E}^{\mu}\big[\mathrm{e}^{s V_{2n-1}} \mid Z_n,\ldots,Z_{2n-2}\big] \le \mathrm{e}^{s^2(\ell_{\mathcal{H}}/(1-\ell_F))^2/8n^2}.
$$
Iterating this estimate over $i = 2n-1,\ldots,n$ yields
$$
\mathbb{P}^{\mu}\big(\varphi(Z_n,\ldots,Z_{2n-1}) - \mathbb{E}^{\mu}[\varphi(Z_n,\ldots,Z_{2n-1})] \ge \varepsilon\big) \le \mathrm{e}^{-s\varepsilon + s^2(\ell_{\mathcal{H}}/(1-\ell_F))^2/8n}.
$$
Minimizing the function $s\mapsto \mathrm{e}^{-s\varepsilon + s^2(\ell_{\mathcal{H}}/(1-\ell_F))^2/8n}$ (the minimum is attained at $s = 4\varepsilon n/(\ell_{\mathcal{H}}/(1-\ell_F))^2$), we arrive at
$$
\mathbb{P}^{\mu}\big(\varphi(Z_n,\ldots,Z_{2n-1}) - \mathbb{E}^{\mu}[\varphi(Z_n,\ldots,Z_{2n-1})] \ge \varepsilon\big) \le \mathrm{e}^{-2\varepsilon^2 n/(\ell_{\mathcal{H}}/(1-\ell_F))^2},
$$
which completes the proof.

In the following lemma, we analyze the term $\mathbb{E}^{\mu}[\varphi(Z_n,\ldots,Z_{2n-1})]$.

Lemma 4.2. Assume (A1) and (A2). For any $\mu\in\mathcal{P}(\mathsf{Z})$, the following inequality holds:
$$
\mathbb{E}^{\mu}[\varphi(Z_n,\ldots,Z_{2n-1})] \le 2\mathcal{R}_{n,\pi}(\mathcal{H}) + \ell_{\mathcal{H}}\, \mathcal{W}(\mu\mathcal{P}^n, \pi).
$$
Proof.
We start by rewriting $\mathbb{E}^{\mu}[\varphi(Z_n,\dots,Z_{2n-1})]$ as follows:
$$ \begin{aligned} \mathbb{E}^{\mu}[\varphi(Z_n,\dots,Z_{2n-1})] &= \mathbb{E}^{\pi}[\varphi(Z_n,\dots,Z_{2n-1})] + \mathbb{E}^{\mu}[\varphi(Z_n,\dots,Z_{2n-1})] - \mathbb{E}^{\pi}[\varphi(Z_n,\dots,Z_{2n-1})] \\ &= \mathbb{E}^{\pi}[\varphi(Z_0,\dots,Z_{n-1})] + \mathbb{E}^{\mu}\big[ \mathbb{E}^{Z_n}[\varphi(Z_0,\dots,Z_{n-1})] \big] - \mathbb{E}^{\pi}[\varphi(Z_0,\dots,Z_{n-1})] \\ &= \mathbb{E}^{\pi}[\varphi(Z_0,\dots,Z_{n-1})] + \int_{Z} \mathbb{E}^{z}[\varphi(Z_0,\dots,Z_{n-1})] \, \big( \mu P^n(\mathrm{d}z) - \pi(\mathrm{d}z) \big) . \end{aligned} $$
Next, we establish that the function $z \mapsto \mathbb{E}^{z}[\varphi(Z_0,\dots,Z_{n-1})]$ is Lipschitz with Lipschitz constant at most $\ell_{\mathcal{H}}$. For any $z, \bar{z} \in Z$, using eq.
(4.1), we obtain:
$$ \begin{aligned} &| \mathbb{E}^{z}[\varphi(Z_0,\dots,Z_{n-1})] - \mathbb{E}^{\bar{z}}[\varphi(Z_0,\dots,Z_{n-1})] | \\ &\le \mathbb{E}\big[ | \varphi(z, F^1(z,\vartheta_1), \dots, F^{n-1}(z,\vartheta_{n-1})) - \varphi(\bar{z}, F^1(\bar{z},\vartheta_1), \dots, F^{n-1}(\bar{z},\vartheta_{n-1})) | \big] \\ &\le \mathbb{E}\left[ \sup_{h \in \mathcal{H}} \frac{1}{n} \sum_{i=0}^{n-1} | \mathcal{L}_h(F^i(z,\vartheta_i)) - \mathcal{L}_h(F^i(\bar{z},\vartheta_i)) | \right] \\ &\le \frac{1}{n} \sum_{i=0}^{n-1} \ell_{\mathcal{H}} \, \ell_F^i \, \mathsf{d}_Z(z,\bar{z}) \\ &\le \ell_{\mathcal{H}} \, \mathsf{d}_Z(z,\bar{z}) , \end{aligned} $$
where $F^0(z,\vartheta_0) := z$ and the expectation is taken with respect to $\vartheta_1,\dots,\vartheta_{n-1}$. Applying (A1) and the comment preceding Theorem 1.7 in [27], we conclude that
$$ \mathbb{E}^{\mu}[\varphi(Z_n,\dots,Z_{2n-1})] \leq \mathbb{E}^{\pi}[\varphi(Z_0,\dots,Z_{n-1})] + \ell_{\mathcal{H}} \, \mathcal{W}(\mu P^n, \pi) . $$
To bound $\mathbb{E}^{\pi}[\varphi(Z_0,\dots,Z_{n-1})]$, we observe that
$$ \mathbb{E}^{\pi}[\varphi(Z_0,\dots,Z_{n-1})] = \mathbb{E}^{\pi}\big[ \sup_{h \in \mathcal{H}} | \hat{\mathrm{er}}_n(h) - \mathrm{er}_\pi(h) | \big] = \mathbb{E}^{\pi}\big[ \sup_{h \in \mathcal{H} \cup -\mathcal{H}} ( \hat{\mathrm{er}}_n(h) - \mathrm{er}_\pi(h) ) \big] , $$
where $-\mathcal{H} = \{ -h : h \in \mathcal{H} \}$. Using the approach in [25, the proof of Theorem 3.3], we establish that
$$ \mathbb{E}^{\pi}[\varphi(Z_0,\dots,Z_{n-1})] \leq 2\mathcal{R}_{n,\pi}(\mathcal{H} \cup -\mathcal{H}) = 2\mathcal{R}_{n,\pi}(\mathcal{H}) . $$
This completes the proof.

Remark 4.3. It is worth noting that
$$ \mathbb{E}^{\pi}[\varphi(Z_0,\dots,Z_{n-1})] \geq \frac{1}{2} \mathcal{R}_{n,\pi}(\mathcal{H}) - L_{\mathcal{H}} \sqrt{\frac{\log 2}{2n}} . $$
In particular, $\lim_{n \to \infty} \mathbb{E}^{\pi}[\varphi(Z_0,\dots,Z_{n-1})] = 0$ if, and only if, $\lim_{n \to \infty} \mathcal{R}_{n,\pi}(\mathcal{H}) = 0$. Namely, let $\{\bar{Z}_n\}_{n \geq 0}$ be an independent copy of $\{Z_n\}_{n \geq 0}$.
We have
$$ \begin{aligned} \mathcal{R}_{n,\pi}(\mathcal{H}) &= \mathbb{E}^{\pi}\Big[ \mathbb{E}^{\sigma}\Big[ \sup_{h \in \mathcal{H}} \frac{1}{n} \sum_{i=0}^{n-1} \sigma_i \, \mathcal{L}_h(Z_i) \Big] \Big] \\ &\le \mathbb{E}^{\pi}\Big[ \mathbb{E}^{\sigma}\Big[ \sup_{h \in \mathcal{H}} \frac{1}{n} \sum_{i=0}^{n-1} \sigma_i \big( \mathcal{L}_h(Z_i) - \mathrm{er}_\pi(h) \big) \Big] \Big] + \mathbb{E}^{\sigma}\Big[ \sup_{h \in \mathcal{H}} \Big| \frac{1}{n} \sum_{i=0}^{n-1} \sigma_i \Big| \, | \mathrm{er}_\pi(h) | \Big] \\ &\le 2 \, \mathbb{E}^{\pi}[\varphi(Z_0,\dots,Z_{n-1})] + \frac{L_{\mathcal{H}}}{n} \, \mathbb{E}^{\sigma}\Big[ \sup_{a \in \{(-1,\dots,-1),(1,\dots,1)\}} \langle \sigma, a \rangle \Big] , \end{aligned} $$
where $\sigma = (\sigma_0,\dots,\sigma_{n-1})$ is a vector of i.i.d. Rademacher random variables, $\langle \cdot, \cdot \rangle$ stands for the standard scalar product in $\mathbb{R}^n$, and the first term is bounded by $2\,\mathbb{E}^{\pi}[\varphi(Z_0,\dots,Z_{n-1})]$ through symmetrization with the independent copy $\{\bar{Z}_n\}_{n \geq 0}$. Applying [25, Theorem 3.7], we conclude that
$$ \mathbb{E}^{\sigma}\left[ \sup_{a \in \{(-1,\dots,-1),(1,\dots,1)\}} \langle \sigma, a \rangle \right] \leq \sqrt{2n \log 2} , $$
which proves the claim.
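The Massart-type bound invoked above, $\mathbb{E}^{\sigma}[\sup_{a \in \{(-1,\dots,-1),(1,\dots,1)\}} \langle \sigma, a \rangle] \le \sqrt{2n\log 2}$, can be checked numerically. The following Monte Carlo sketch is purely illustrative and not part of the argument; the helper name, trial count, and seed are our own choices:

```python
import math
import random

def massart_lhs_estimate(n, trials=20000, seed=0):
    """Monte Carlo estimate of E_sigma[ sup_{a in {(-1,...,-1),(1,...,1)}} <sigma, a> ],
    which equals E|sum_i sigma_i| for i.i.d. Rademacher sigma_i."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = sum(rng.choice((-1, 1)) for _ in range(n))
        total += abs(s)  # sup over the two constant sign vectors is |sum_i sigma_i|
    return total / trials

n = 200
lhs = massart_lhs_estimate(n)
rhs = math.sqrt(2 * n * math.log(2))  # the bound from [25, Theorem 3.7]
print(lhs <= rhs)  # the estimate stays below the bound
```

For $n = 200$ the estimate is close to $\sqrt{2n/\pi} \approx 11.3$, comfortably below the bound $\sqrt{2n\log 2} \approx 16.7$.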
From Lemmas 4.1 and 4.2, we conclude that for any $\mu \in \mathcal{P}(Z)$ and $\varepsilon \in (0,1)$, the following holds:
$$ \mathbb{P}^{\mu}\left( \sup_{h \in \mathcal{H}} | \hat{\mathrm{er}}_{n,2n}(h) - \mathrm{er}_\pi(h) | \leq 2\mathcal{R}_{n,\pi}(\mathcal{H}) + \ell_{\mathcal{H}} \, \mathcal{W}(\mu P^n, \pi) + \varepsilon \right) \geq 1 - \mathrm{e}^{-2\varepsilon^2 n / (\ell_{\mathcal{H}}/(1-\ell_F))^2} . $$
Equivalently, we have
$$ \mathbb{P}^{\mu}\left( \sup_{h \in \mathcal{H}} | \hat{\mathrm{er}}_{n,2n}(h) - \mathrm{er}_\pi(h) | \leq 2\mathcal{R}_{n,\pi}(\mathcal{H}) + \ell_{\mathcal{H}} \ell_F^n \, \mathcal{W}(\mu, \pi) + \varepsilon \right) \geq 1 - \mathrm{e}^{-2\varepsilon^2 n / (\ell_{\mathcal{H}}/(1-\ell_F))^2} . $$
In the following lemma, we derive an analogous result in terms of the $n$-empirical Rademacher complexity.

Lemma 4.4. Assume (A1) and (A2). Then, for any $\varepsilon \in (0,1)$, we have
$$ \mathbb{P}^{\pi}\left( \sup_{h \in \mathcal{H}} | \hat{\mathrm{er}}_{n,2n}(h) - \mathrm{er}_\pi(h) | \leq 2\hat{\mathcal{R}}_{n,(Z_0,\dots,Z_{n-1})}(\mathcal{H}) + 3\varepsilon \right) \geq 1 - \mathrm{e}^{-2\varepsilon^2 n / (\ell_{\mathcal{H}}/(1-\ell_F))^2} . $$
Proof. From eq.
(4.3), for any $\delta \in (0,1)$, we obtain
$$ \mathbb{P}^{\pi}\Bigg( \sup_{h \in \mathcal{H}} | \hat{\mathrm{er}}_{n,2n}(h) - \mathrm{er}_\pi(h) | \leq 2\mathcal{R}_{n,\pi}(\mathcal{H}) + (\ell_{\mathcal{H}}/(1-\ell_F)) \sqrt{\frac{\log(2/\delta)}{2n}} \Bigg) \geq 1 - \delta/2 . $$
Next, let $\phi : Z^n \to \mathbb{R}$ be defined as $\phi(z) := -\hat{\mathcal{R}}_{n,z}(\mathcal{H})$. According to assumption (A2), we have $|\phi(z)| \leq \ell_{\mathcal{H}}$ for all $z \in Z^n$. Consequently, for each $i = n,\dots,2n-1$, the conditional expectation $\mathbb{E}^{\pi}[\phi(Z_n,\dots,Z_{2n-1}) \mid Z_n,\dots,Z_i]$ is well defined, and $\mathbb{E}^{\pi}[\phi(Z_0,\dots,Z_{n-1})] = -\mathcal{R}_{n,\pi}(\mathcal{H})$. Furthermore, for any $z, \bar{z} \in Z^n$, we obtain
$$ \begin{aligned} | \phi(z_1,\dots,z_n) - \phi(\bar{z}_1,\dots,\bar{z}_n) | &= \frac{1}{n} \left| \mathbb{E}^{\sigma}\left[ \sup_{h \in \mathcal{H}} \sum_{i=1}^{n} \sigma_i \mathcal{L}_h(z_i) - \sup_{h \in \mathcal{H}} \sum_{i=1}^{n} \sigma_i \mathcal{L}_h(\bar{z}_i) \right] \right| \\ &\leq \frac{1}{n} \sup_{h \in \mathcal{H}} \sum_{i=1}^{n} | \mathcal{L}_h(z_i) - \mathcal{L}_h(\bar{z}_i) | . \end{aligned} $$
Define the function $f_i : Z^{i-n+1} \to \mathbb{R}$ by
$$ f_i(Z_n,\dots,Z_i) := \mathbb{E}^{\pi}[\phi(Z_n,\dots,Z_{2n-1}) \mid Z_n,\dots,Z_i] . $$
Proceeding analogously to Lemma 4.1, we obtain
$$ \begin{aligned} &| f_i(z_1,\dots,z_{i-n+1}) - f_i(\bar{z}_1,\dots,\bar{z}_{i-n+1}) | \\ &\leq \frac{1}{n} \int_{Z} \cdots \int_{Z} \sup_{h \in \mathcal{H}} \Bigg( \sum_{j=1}^{i-n+1} | \mathcal{L}_h(z_j) - \mathcal{L}_h(\bar{z}_j) | \\ &\qquad + \sum_{j=1}^{2n-i-1} | \mathcal{L}_h(F^j(z_i,y_j)) - \mathcal{L}_h(F^j(\bar{z}_i,y_j)) | \Bigg) \mathbb{P}_{\vartheta_{2n-i-1}}(\mathrm{d}y_{2n-i-1}) \cdots \mathbb{P}_{\vartheta_1}(\mathrm{d}y_1) , \end{aligned} $$
where
$$ F^1(z_i,y_1) = F(z_i,y_1) \qquad \text{and} \qquad F^j(z_i,y_j) = F(F^{j-1}(z_i,y_{j-1}),y_j) . $$
Next, define
$$ \begin{aligned} V_i &:= f_i(Z_n,\dots,Z_i) - f_{i-1}(Z_n,\dots,Z_{i-1}) \\ L_i &:= \inf_{z_i \in Z} f_i(Z_n,\dots,Z_{i-1},z_i) - f_{i-1}(Z_n,\dots,Z_{i-1}) \\ R_i &:= \sup_{z_i \in Z} f_i(Z_n,\dots,Z_{i-1},z_i) - f_{i-1}(Z_n,\dots,Z_{i-1}) , \end{aligned} $$
with
$$ \begin{aligned} V_n &:= f_n(Z_n) - \mathbb{E}^{\pi}[\phi(Z_n,\dots,Z_{2n-1})] \\ L_n &:= \inf_{z_1 \in Z} f_n(z_1) - \mathbb{E}^{\pi}[\phi(Z_n,\dots,Z_{2n-1})] \\ R_n &:= \sup_{z_1 \in Z} f_n(z_1) - \mathbb{E}^{\pi}[\phi(Z_n,\dots,Z_{2n-1})] . \end{aligned} $$
It follows, as in Lemma 4.1, that $L_i \leq V_i \leq R_i$, $\mathbb{E}^{\pi}[V_i \mid Z_n,\dots,Z_{i-1}] = 0$, $\mathbb{E}^{\pi}[V_n] = 0$,
$$ \sum_{i=n}^{2n-1} V_i = \phi(Z_n,\dots,Z_{2n-1}) - \mathbb{E}^{\pi}[\phi(Z_n,\dots,Z_{2n-1})] , $$
$$ R_i - L_i \leq \frac{\ell_{\mathcal{H}}/(1-\ell_F)}{n} $$
(i.e., $L_i \leq V_i \leq L_i + (\ell_{\mathcal{H}}/(1-\ell_F))/n$), and
$$ \begin{aligned} \mathbb{P}^{\pi}\big( \mathcal{R}_{n,\pi}(\mathcal{H}) - \hat{\mathcal{R}}_{n,(Z_0,\dots,Z_{n-1})}(\mathcal{H}) \geq \varepsilon \big) &= \mathbb{P}^{\pi}\big( \phi(Z_n,\dots,Z_{2n-1}) - \mathbb{E}^{\pi}[\phi(Z_n,\dots,Z_{2n-1})] \geq \varepsilon \big) \\ &= \mathbb{P}^{\pi}\left( \sum_{i=n}^{2n-1} V_i \geq \varepsilon \right) \\ &\leq \mathrm{e}^{-2\varepsilon^2 n / (\ell_{\mathcal{H}}/(1-\ell_F))^2} , \end{aligned} $$
i.e., taking $\varepsilon = (\ell_{\mathcal{H}}/(1-\ell_F)) \sqrt{\log(2/\delta)/(2n)}$,
$$ \mathbb{P}^{\pi}\left( \mathcal{R}_{n,\pi}(\mathcal{H}) \leq \hat{\mathcal{R}}_{n,(Z_0,\dots,Z_{n-1})}(\mathcal{H}) + (\ell_{\mathcal{H}}/(1-\ell_F)) \sqrt{\frac{\log(2/\delta)}{2n}} \right) \geq 1 - \delta/2 . $$
Combining this with eq. (4.4) yields the desired result.

Finally, we prove Theorem 2.1.

Proof of Theorem 2.1. Fix $\varepsilon \in (0,1)$, and let $\mathcal{A}^{\varepsilon}$ be the $\varepsilon$-ERM algorithm for $\mathcal{H}$. From eq.
(4.3), it follows that, with probability at least $1 - 2\mathrm{e}^{-2\varepsilon^2 n/(\ell_{\mathcal{H}}/(1-\ell_F))^2}$, we have
$$ \begin{aligned} \mathrm{er}_\pi(\mathcal{A}^{\varepsilon}(Z_0,\dots,Z_{n-1})) &\leq \hat{\mathrm{er}}_{n,2n}(\mathcal{A}^{\varepsilon}(Z_0,\dots,Z_{n-1})) + 2\mathcal{R}_{n,\pi}(\mathcal{H}) + \ell_{\mathcal{H}} \ell_F^n \, \mathcal{W}(\mu,\pi) + \varepsilon \\ &\leq \inf_{h \in \mathcal{H}} \hat{\mathrm{er}}_{n,2n}(h) + 2\mathcal{R}_{n,\pi}(\mathcal{H}) + \ell_{\mathcal{H}} \ell_F^n \, \mathcal{W}(\mu,\pi) + 2\varepsilon \\ &\leq \hat{\mathrm{er}}_{n,2n}(\bar{h}) + 2\mathcal{R}_{n,\pi}(\mathcal{H}) + \ell_{\mathcal{H}} \ell_F^n \, \mathcal{W}(\mu,\pi) + 2\varepsilon \\ &\leq \mathrm{er}_\pi(\bar{h}) + 4\mathcal{R}_{n,\pi}(\mathcal{H}) + 2\ell_{\mathcal{H}} \ell_F^n \, \mathcal{W}(\mu,\pi) + 3\varepsilon \\ &\leq \mathrm{opt}_\pi(\mathcal{H}) + 4\mathcal{R}_{n,\pi}(\mathcal{H}) + 2\ell_{\mathcal{H}} \ell_F^n \, \mathcal{W}(\mu,\pi) + 4\varepsilon , \end{aligned} $$
where $\bar{h} \in \mathcal{H}$ satisfies $\mathrm{er}_\pi(\bar{h}) \leq \mathrm{opt}_\pi(\mathcal{H}) + \varepsilon$. The second claim follows analogously by applying Lemma 4.4.

Remark 4.5. Recall that
$$ \sup_{z_1,\dots,z_n \in Z} \varphi(z_1,\dots,z_n) \leq \ell_{\mathcal{H}} . $$
Consequently, we obtain
$$ \int_{Z} \mathbb{E}^{z}[\varphi(Z_0,\dots,Z_{n-1})] \, (\mu P^n(\mathrm{d}z) - \pi(\mathrm{d}z)) \leq \ell_{\mathcal{H}} \, \| \mu P^n - \pi \|_{\mathrm{TV}} , $$
where
$$ \| \eta \|_{\mathrm{TV}} := \frac{1}{2} \sup_{f : Z \to \mathbb{R}, \ \| f \|_\infty \leq 1} | \eta(f) | $$
denotes the total variation norm of a signed measure $\eta(\mathrm{d}z)$ on $Z$. Using this, the conclusion of Lemma 4.2 can be expressed as
$$ \mathbb{E}^{\mu}[\varphi(Z_n,\dots,Z_{2n-1})] \leq 2\mathcal{R}_{n,\pi}(\mathcal{H}) + \ell_{\mathcal{H}} \, \| \mu P^n - \pi \|_{\mathrm{TV}} . $$
Combining this with Lemma 4.1, we obtain an alternative formulation of the first assertion in Theorem 2.1:
$$ \begin{aligned} &\mathbb{P}^{\mu}\left( | \mathrm{er}_\pi(\mathcal{A}^{\varepsilon}(Z_0,\dots,Z_{n-1})) - \mathrm{opt}_\pi(\mathcal{H}) | < 4\mathcal{R}_{n,\pi}(\mathcal{H}) + 2\ell_{\mathcal{H}} \, \| \mu P^n - \pi \|_{\mathrm{TV}} + 4\varepsilon \right) \\ &\geq 1 - 2\mathrm{e}^{-2\varepsilon^2 n/(\ell_{\mathcal{H}}/(1-\ell_F))^2} . \end{aligned} $$
However, by [33, Theorem 6.15], the Wasserstein distance satisfies
$$ \mathcal{W}(\mu,\nu) \leq \| \mu - \nu \|_{\mathrm{TV}} \qquad \forall \mu, \nu \in \mathcal{P}(Z) . $$
In general, to ensure that $\| \mu P^n - \pi \|_{\mathrm{TV}}$ converges to zero as $n \to \infty$, it is necessary that $\{Z_n\}_{n \geq 0}$ is irreducible and aperiodic (see, e.g., [22]). Thus, in this context, it is more natural and general to present the results in terms of the Wasserstein distance, as illustrated in Example 2.4.

# ACKNOWLEDGEMENTS

The author is grateful for the insightful and constructive comments received from two anonymous referees and the associate editor, which have led to improvements in the article. Financial support through Croatian Science Foundation under project 2277 is gratefully acknowledged.

# CONFLICT OF INTEREST

The authors declare that they have no conflict of interest.

# REFERENCES

[1] P. Alquier, P. Doukhan, and X. Fan. Exponential inequalities for nonstationary Markov chains. Depend. Model., 7(1):150–168, 2019. [2] M. Anthony and P. L. Bartlett. Neural network learning: theoretical foundations. Cambridge University Press, Cambridge, 1999. [3] P. Bertail and F. Portier. Rademacher complexity for Markov chains: applications to kernel smoothing and Metropolis-Hastings. Bernoulli, 25(4B):3912–3938, 2019. [4] D. Bosq. Nonparametric statistics for stochastic processes. Springer-Verlag, New York, 1998. [5] S. Boucheron, G. Lugosi, and P. Massart. Concentration inequalities: a nonasymptotic theory of independence. Oxford University Press, Oxford, 2013. With a foreword by Michel Ledoux. [6] J.-R. Chazottes and F. Redig. Concentration inequalities for Markov processes via coupling. Electron. J. Probab., 14: no. 40, 1162–1180, 2009. [7] J. Dedecker and X. Fan. Deviation inequalities for separately Lipschitz functionals of iterated random functions. Stochastic Process. Appl., 125(1):60–90, 2015. [8] P. Diaconis and D. Freedman. Iterated random functions. SIAM Rev., 41(1):45–76, 1999. [9] D. P. Dubhashi and A. Panconesi. Concentration of measure for the analysis of randomized algorithms.
Cambridge University Press, Cambridge, 2009. [10] D. Gamarnik. Extension of the PAC framework to finite and countable Markov chains. IEEE Trans. Inform. Theory, 49(1):338–345, 2003. [11] W. Gao, X.-Y. Niu, and Z.-H. Zhou. Learnability of non-i.i.d. In Asian Conference on Machine Learning, 2016. [12] L. Györfi, W. Härdle, P. Sarda, and P. Vieu. Nonparametric curve estimation from time series. Springer-Verlag, Berlin, 1989. [13] M. Hairer. Convergence of Markov processes. Lecture notes, University of Warwick. Available at http://www.hairer.org/notes/Convergence.pdf, 2016. [14] A. Irle. On consistency in nonparametric estimation under mixing conditions. J. Multivariate Anal., 60(1):123–147, 1997. [15] A. Kontorovich and M. Raginsky. Concentration of measure without independence: a unified approach via the martingale method. In Convexity and concentration, volume 161 of IMA Vol. Math. Appl., pages 183–210. Springer, New York, 2017. [16] L. Kontorovich and K. Ramanan. Concentration inequalities for dependent random variables via the martingale method. Ann. Probab., 36(6):2126–2158, 2008. [17] A. Kulik. Ergodic behavior of Markov processes. De Gruyter, Berlin, 2018. [18] V. Kuznetsov and M. Mohri. Generalization bounds for time series prediction with non-stationary processes. In International Conference on Algorithmic Learning Theory, 2014. [19] V. Kuznetsov and M. Mohri. Generalization bounds for non-stationary mixing processes. Machine Learning, 106:93–117, 2016. [20] M. Ledoux. The concentration of measure phenomenon, volume 89 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2001. [21] R. Meir. Nonparametric time series prediction through adaptive model selection. Machine Learning, 39(1):5–34, 2000. [22] S. Meyn and R. L. Tweedie. Markov chains and stochastic stability. Cambridge University Press, Cambridge, second edition, 2009. [23] D. S. Modha and E. Masry. Memory-universal prediction of stationary random processes.
IEEE Trans. Inform. Theory, 44(1):117–133, 1998. [24] M. Mohri and A. Rostamizadeh. Rademacher complexity bounds for non-i.i.d. processes. In Neural Information Processing Systems, 2008. [25] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of machine learning. MIT Press, Cambridge, MA, 2018. Second edition. [26] D. Paulin. Concentration inequalities for Markov chains by Marton couplings and spectral methods. Electron. J. Probab., 20, 2015. [27] N. Sandrić. A note on the Birkhoff ergodic theorem. Results Math., 72(1-2):715–730, 2017. [28] N. Sandrić and S. Šebek. Learning from non-irreducible Markov chains. J. Math. Anal. Appl., 523(2):Paper No. 127049, 14, 2023. [29] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning - From Theory to Algorithms. Cambridge University Press, 2014. [30] K. Sridharan. Convergence of Markov Processes. Lecture Notes, Cornell University, 2015. Available at https://www.cs.cornell.edu/courses/cs6783/2015fa/. [31] I. Steinwart, D. Hush, and C. Scovel. Learning from dependent observations. J. Multivariate Anal., 100(1):175–194, 2009. [32] M. Vidyasagar. Learning and generalization. Springer-Verlag London, Ltd., London, 2003. [33] C. Villani. Optimal transport. Springer-Verlag, Berlin, 2009. [34] B. Yu. Rates of convergence for empirical processes of stationary mixing sequences. Ann. Probab., 22(1):94– 116, 1994. [35] B. Zou, L. Li, and Z. Xu. Generalization performance of least-square regularized regression algorithm with Markov chain samples. J. Math. Anal. Appl., 388(1):333–343, 2012. [36] B. Zou, L. Li, Z. Xu, T. Luo, and Y. Y. Tang. Generalization performance of Fisher linear discriminant based on Markov sampling. IEEE Trans. Neural. Netw. Learn. Syst., 24(2):288–300, 2013. [37] B. Zou, Z. Xu, and J. Xu. Generalization bounds of ERM algorithm with Markov chain samples. Acta Math. Appl. Sin. Engl. Ser., 30(1):223–238, 2014. [38] B. Zou, H. Zhang, and Z. Xu. Learning from uniformly ergodic Markov chains. J. 
Complexity, 25(2):188–200, 2009. (Nikola Sandrić) DEPARTMENT OF MATHEMATICS, UNIVERSITY OF ZAGREB, ZAGREB, CROATIA
Most existing literature on supervised machine learning assumes that the training dataset is drawn from an i.i.d. sample. However, many real-world problems exhibit temporal dependence and strong correlations between the marginal distributions of the data-generating process, suggesting that the i.i.d. assumption is often unrealistic. In such cases, models naturally include time-series processes with mixing properties, as well as irreducible and aperiodic ergodic Markov chains. Moreover, the learning rates typically obtained in these settings are independent of the data distribution, which can lead to restrictive choices of hypothesis classes and suboptimal sample complexities for the learning algorithm. In this article, we consider the case where the training dataset is generated by an iterated random function (i.e., an iteratively defined time-homogeneous Markov chain) that is not necessarily irreducible or aperiodic. Under the assumption that the governing function is contractive with respect to its first argument and subject to certain regularity conditions on the hypothesis class, we first establish a uniform convergence result for the corresponding sample error. We then demonstrate the learnability of the approximate empirical risk minimization algorithm and derive its learning rate bound. Both rates are data-distribution dependent, expressed in terms of the Rademacher complexities of the underlying hypothesis class, allowing them to more accurately reflect the properties of the data-generating distribution.
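The data-generating mechanism described above — an iterated random function whose governing map is contractive in its first argument — can be illustrated with a toy linear chain. This is a hedged sketch of our own (the AR(1)-type choice of $F$, the constants, and the helper name are illustrative, not from the paper): running two copies of the chain from different initial points with the same innovations exhibits the geometric coupling $|Z_n - Z_n'| = \ell_F^n |Z_0 - Z_0'|$ that underlies the Wasserstein term $\ell_{\mathcal{H}} \ell_F^n \, \mathcal{W}(\mu,\pi)$ in the bounds.

```python
import random

def simulate_pair(z0, z0_prime, l_F=0.5, n=30, seed=1):
    """Run two copies of the iterated random function Z_{k+1} = F(Z_k, theta_k),
    with F(z, t) = l_F * z + t (contractive in z since l_F < 1), driven by the
    SAME innovations theta_k; return the gap between the two chains after n steps."""
    rng = random.Random(seed)
    z, zp = z0, z0_prime
    for _ in range(n):
        theta = rng.gauss(0.0, 1.0)  # shared innovation theta_k
        z = l_F * z + theta          # F(z, theta)
        zp = l_F * zp + theta
    return abs(z - zp)

gap = simulate_pair(10.0, -10.0, l_F=0.5, n=30)
print(gap <= 0.5 ** 30 * 20.0 + 1e-9)  # contraction at rate l_F^n
```

Because the innovations cancel in the difference of the two chains, the gap after $n$ steps equals $\ell_F^n$ times the initial gap (up to floating-point error), mirroring the estimate $\mathcal{W}(\mu P^n, \pi) \le \ell_F^n \, \mathcal{W}(\mu,\pi)$.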
# 1. Introduction

A key challenge in multi-modal human activity understanding tasks, such as human activity recognition (HAR), human pose estimation (HPE), retrieval, or person re-identification (RE-ID) “in the wild”, is obtaining paired sensor data for each individual in a multi-person scene (e.g., IMU with human poses, point clouds, or RGB videos). Prior work has studied RGB-IMU matching for identity-aware tracking/RE-ID [4, 20], RGB-IMU matching for video retrieval [39], and IMU-Skeleton Pose matching [56] to correct IMU drift in multi-modal HPE. However, existing methods primarily focus on RGB-centric modalities, limiting applicability to privacy-sensitive scenarios like healthcare and surveillance, where RGB cameras cannot be deployed. To address privacy concerns, silhouette masks [37, 38] or skeletons [1] have been proposed to anonymize detected individuals in RGB video. While effective, these anonymization techniques still come with the limitation that they require post-processing and short-term storage of the raw RGB data. In contrast, LiDAR is a privacy-preserving alternative, with proven capabilities for multi-modal HAR (e.g., [29, 60]) and HPE (e.g., [24, 47]). However, matching skeleton or IMU signals to LiDAR-based point cloud sequences is underexplored. Beyond matching, recent advances in multi-modal contrastive learning, such as ImageBind [15], IMU2CLIP [39], BabelTower [8], MotionCLIP [53], and LAVIMO [62], have demonstrated the power and benefits of cross-modal alignment for human activity understanding. These models learn a shared embedding space, enabling cross-modality matching, retrieval, and effective neural network pre-training for downstream tasks. Despite their success, they all use RGB data as the main visual modality to bind the learned representations.
Extending this line of research, our paper asks the important research question: What happens if we depend on LiDAR alone as the main visual modality in multi-modal contrastive learning? This question has not been studied before. We present DeSPITE, a Deep Skeleton-Pointcloud-IMU-Text Embedding model, which is illustrated in Figure 1. Inspired by CLIP [45], ImageBind [15], and IMU2CLIP [39], DeSPITE learns a shared embedding space through large-scale noise contrastive estimation between paired sequences of point cloud, skeleton, IMU, and text data. Unlike prior works leveraging frozen text or RGB data embeddings as a binding modality (e.g., IMU2CLIP [39], MotionCLIP [53], LAVIMO [62]), our primary goal is not to demonstrate modality alignment to text.

Figure 1: DeSPITE aligns skeleton, LiDAR point cloud, IMU, and text in a joint embedding space, enabling (a) Re-ID/matching, (b) temporal moment retrieval, (c) HAR pre-training, and (d) X-to-X retrieval.

Instead, we present novel applications for point cloud-based human activity sequences that were not possible before, enabled by unifying these modalities into a joint embedding space. Furthermore, while our primary focus is on enabling novel cross-modal retrieval and matching tasks, we find that DeSPITE also serves as an effective pre-training method for HAR, demonstrating improvements over uni-modal baselines. Finally, to understand the contribution of each modality to the joint embedding space, we train several DeSPITE variants (DeSPIE, DeSPE, etc.), evaluating their individual impact on cross-modal matching, retrieval, and HAR performance.
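The noise contrastive estimation objective used by CLIP-style models can be sketched as follows. This is a minimal illustration, not DeSPITE's actual implementation; the function name, batch size, embedding dimension, and temperature are our own assumptions:

```python
import numpy as np

def info_nce_loss(emb_a, emb_b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of paired embeddings
    (e.g., point cloud vs. IMU snippets); row i of emb_a is paired with
    row i of emb_b, and all other rows in the batch act as negatives."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature            # pairwise cosine similarities
    labels = np.arange(len(a))

    def xent(l):
        # cross-entropy of the diagonal (matching) entries, numerically stable
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # symmetric loss: a -> b and b -> a retrieval directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
loss_aligned = info_nce_loss(x, x)                      # perfectly paired embeddings
loss_random = info_nce_loss(x, rng.normal(size=(8, 16)))
print(loss_aligned < loss_random)                       # aligned pairs give lower loss
```

Minimizing this loss pulls the paired snippets of the two modalities together in the shared space while pushing apart non-matching pairs within the batch.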
As described in Figure 1, after successful alignment in a shared embedding space, DeSPITE and its variants (e.g., DeSPIE) allow novel and very useful applications between point cloud, IMU, and skeleton data: (a) person Re-ID by matching a snippet of, e.g., IMU data to the correct point cloud snippet, or a skeleton snippet to point cloud sequences, (b) temporal moment retrieval/search in a video with skeletons, IMU, or a point cloud as query, (c) pre-training modality-specific encoders for human activity recognition, and (d) retrieval of each modality through each modality from a large motion database. To train and evaluate DeSPITE, we construct LIPD-Babel, a dataset aligning point clouds, IMU, skeletons, and text by integrating the LIPD dataset [47] with the text annotations from Babel [44]. LIPD-Babel enables two key evaluations: (1) LIPD-Babel-v1 for cross-modal matching and retrieval, demonstrating DeSPITE and DeSPIE’s ability to align point clouds, skeletons, and IMU signals, and (2) LIPD-Babel-v2 for contrastive pre-training, where DeSPITE and DeSPIE improve single-modality HAR, surpassing SOTA on HMPEAR [29] and MSR-Action3D [27]. Our contributions are as follows. (i) DeSPITE: a Deep Skeleton-Pointcloud-IMU-Text Embedding model that enables cross-modal matching and retrieval tasks between point clouds, IMU, and skeletons, unlocking applications that were previously impossible; we will release the resulting pre-trained encoders, code, and data for future research. (ii) We show that DeSPITE is an effective contrastive pre-training strategy for single-modality HAR, demonstrating new state-of-the-art performance. (iii) LIPD-Babel, a new dataset for privacy-preserving multi-modal learning, with version LIPD-Babel-v1 tailored for matching and retrieval tasks, and LIPD-Babel-v2 tailored for HAR.

# 2. Related Work

# 2.1. Multi-Modal Contrastive Learning for Human Activity Understanding

Recent works have explored unified embedding spaces across sensor modalities.
ImageBind [15] binds six modalities using image-text pairs, while IMU2CLIP [39] and MotionCLIP [53] align IMU and skeleton data with CLIP’s image-text space. LAVIMO [62] improves skeleton-video-text retrieval, and BabelTower [8] incrementally aligns six sensing modalities, reducing reliance on RGB. Unlike these works, we focus exclusively on privacy-preserving modalities, introducing LiDAR into a joint embedding space with IMU and skeletons. This enables novel retrieval tasks (LiDAR $\leftrightarrow$ IMU, LiDAR $\leftrightarrow$ Skeleton) and serves as an effective pre-training strategy for point cloud-based HAR.

# 2.2. Pre-Training for Point Cloud Human Activity Recognition

While countless general-purpose embedding models and foundation models have emerged in recent years for RGB images/videos, natural language, or audio (e.g., ImageBind [15], DinoV2 [41], SAM2 [46], CLIP [45], BERT [9]), pre-trained general-purpose models for (LiDAR) point cloud HAR do not exist yet due to the lack of a comparable amount of data. To this day, pre-training for point cloud HAR mainly happens through self-supervision on a small number of datasets, or on the same dataset used for fine-tuning. Self-supervision includes temporal order prediction from shuffling in [52, 57], contrastive learning on masked sequences [18, 51], temporal structure prediction [50], or knowledge distillation [63]. All these methods have been shown to improve performance after fine-tuning for HAR compared to training from scratch. Different from these works, we show that multi-modal contrastive learning between point clouds and other closely related modalities (i.e., skeleton pose and IMU data) leads to improved HAR performance after fine-tuning, opening new possibilities for future research in point cloud HAR.

# 2.3. Cross-Modal Matching and Retrieval between Modalities

Cross-modal matching assigns a data point from one modality to its correct counterpart in another.
Key applications include audio-visual association [17, 23], IMU-based matching to human pose, RGB, or silhouette masks [37, 39, 56], and text-to-motion retrieval [43, 62]. Person Re-ID via IMU signals has been explored with RGB videos [4, 28], silhouette masks [37, 38], and skeletons [1, 56]. Retrieval tasks also exist between skeletons and text [43], skeletons and RGB [62], and IMU and RGB [39], with prior works exploring temporal moment retrieval and database retrieval. However, LiDAR-based cross-modal retrieval remains largely unexplored. Our work extends these approaches by aligning LiDAR, IMU, and skeleton data, enabling novel retrieval tasks such as LiDAR $\leftrightarrow$ Skeleton and LiDAR $\leftrightarrow$ IMU. We further extend IMU interpretability via RGB video retrieval [39] to point cloud and skeleton retrieval, unlocking a new and effective way to interpret IMU signals.

# 2.4. Datasets for LiDAR Point Cloud Human Activity Recognition

Early point cloud HAR datasets, like MSR-Action3D [27] and NTU-RGB+D [30, 49], are derived from depth maps and have been foundational in advancing state-of-the-art methods in the field. Datasets with real LiDAR point clouds of human activities are rare; among the few are HuCenLife [60] and the recent HMPEAR [29] and MM-Fi [61] datasets. Motivated by multi-modal LiDAR and IMU-based HPE, several datasets have been proposed recently, such as LidarCap [24] and LIPD [47]. In particular, LIPD is a large-scale dataset with human motions of LiDAR point clouds, human skeletons, and IMU data, but without activity annotations. It is a mix of synthetic and real data, where a large part comes from AMASS [36], a large-scale human motion capture dataset. On top of AMASS, Babel [44] and HumanML3D [16] added natural language annotations.
For our study, we combine LIPD with its corresponding subset in AMASS into a new dataset, LIPD-Babel, which enriches LIPD with partial human activity annotations. This yields a large pre-training data resource spanning human skeletons, IMU, LiDAR point clouds, and text, which we leverage to explore contrastive learning between these modalities.

# 3. Method

# 3.1. Problem Statement

The goal of this work is to learn a joint embedding space that aligns human motion observed through different privacy-preserving modalities. More specifically, we present DeSPITE, a Deep Skeleton-Pointcloud-IMU-Text Embedding model, which effectively learns a joint embedding space across these four modalities through noise contrastive estimation. We train several versions of DeSPITE, where we vary the number of modalities (DeSIE, DeSPE, DePIE, DePITE, ...). When all modalities are used, the text embeddings of CLIP serve as a binding modality, which has been shown to be effective in several recent related works for modalities that capture human motion, such as IMU data and human skeletons [1, 15, 35, 39, 53, 62]. We want to emphasize that the primary goal of DeSPITE is not to show that we can bind skeleton, point cloud, or IMU data to CLIP text embeddings (this has been demonstrated before with, e.g., ImageBind [15], IMU2CLIP [39], MotionCLIP [53], or BabelTower [8]). Instead, our main goal is to present several novel, unexplored applications for human activity point cloud sequences that emerge when we unify these modalities into a joint embedding space.

# 3.2. Learning Deep Skeleton-Pointcloud-IMU-Text Embeddings (DeSPITE)

Human motions represented through LiDAR point clouds, IMU time series, and human pose skeleton data have an inherent correspondence. We leverage this property to learn a joint embedding space where similar sequences of human motions are close and different sequences are far apart.
Given a point cloud sequence $X_{pc} := \{pc_1, \ldots, pc_T\}$, with $pc_i \in \mathbb{R}^{N \times 3}$, an IMU sequence $X_{imu} := \{imu_1, \ldots, imu_T\}$, with $imu_i \in \mathbb{R}^{C}$, and a human pose sequence $X_{pose} := \{pose_1, \ldots, pose_T\}$, with $pose_i \in \mathbb{R}^{24 \times 3}$ representing the 3D positions of 24 skeletal joints, we aim to train neural networks to encode $X_{pc}$, $X_{imu}$, and $X_{pose}$ into a shared embedding space. We denote these neural networks as encoders $$ \begin{array}{r} f_{pc}: \mathbb{R}^{T \times N \times 3} \to \mathbb{R}^{e}, \\ f_{imu}: \mathbb{R}^{T \times C} \to \mathbb{R}^{e}, \\ f_{pose}: \mathbb{R}^{T \times 24 \times 3} \to \mathbb{R}^{e} \end{array} $$ which map the input sequences to embeddings $z_{pc} = f_{pc}(X_{pc})$, $z_{imu} = f_{imu}(X_{imu})$, and $z_{pose} = f_{pose}(X_{pose})$, where $z_{pc}, z_{imu}, z_{pose} \in \mathbb{R}^{e}$. Furthermore, we work in a setting where a natural language description $X_{text}$ is not provided for every $(X_{pc}, X_{pose}, X_{imu})$ triple. For this reason, the loss for text descriptions is only computed on the subset of batch elements that form a full quadruple $(X_{pc}, X_{pose}, X_{imu}, X_{text})$, indicated by a boolean mask $tm \in \mathbb{B}$ that is set to true if a text description $X_{text}$ exists.
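As a shape-level sketch of this encoder setup, the stubs below map each modality into a shared $\mathbb{R}^e$ via mean pooling and a random linear projection. They merely stand in for the learned encoders used in the paper, and the IMU channel count `C` is an assumed value (the paper leaves it unspecified here):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, C, e = 24, 256, 72, 512  # window length, points per frame, IMU channels (assumed), embedding dim
W_pc = rng.normal(size=(3, e))
W_imu = rng.normal(size=(C, e))
W_pose = rng.normal(size=(24 * 3, e))

def f_pc(x):
    """R^{T x N x 3} -> R^e: mean-pool over frames and points, then project."""
    return x.mean(axis=(0, 1)) @ W_pc

def f_imu(x):
    """R^{T x C} -> R^e: mean-pool over time, then project."""
    return x.mean(axis=0) @ W_imu

def f_pose(x):
    """R^{T x 24 x 3} -> R^e: flatten joints, mean-pool over time, then project."""
    return x.reshape(T, -1).mean(axis=0) @ W_pose

z_pc = f_pc(rng.normal(size=(T, N, 3)))
z_imu = f_imu(rng.normal(size=(T, C)))
z_pose = f_pose(rng.normal(size=(T, 24, 3)))
assert z_pc.shape == z_imu.shape == z_pose.shape == (e,)
```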
In this way, we can effectively ignore the elements of a batch $B$ that do not have text descriptions when computing our alignment loss. Following previous works like CLIP [45], ImageBind [15], MotionCLIP [53], and IMU2CLIP [39], we optimize our encoders using a contrastive objective based on Noise Contrastive Estimation (NCE). For a batch of $B$ paired samples, we obtain a boolean mask and embeddings $\left( z_{pc}^{i}, z_{imu}^{i}, z_{pose}^{i}, z_{text}^{i}, tm^{i} \right)_{i=1}^{B}$, where $z_{text}^{i}$ is obtained from a frozen CLIP text encoder. For batch elements $i$ without text pairings, we set $tm^{i}$ to 0 and use a dummy embedding; otherwise, $tm^{i}$ is set to 1. The similarity between embeddings is defined using the cosine similarity: $$ \mathrm{sim}(z_a, z_b) = \frac{z_a \cdot z_b}{\| z_a \| \| z_b \|}, $$ where $z_a, z_b \in \{ z_{pc}, z_{imu}, z_{pose}, z_{text} \}$. The contrastive loss for each modality pair $(a, b)$ in the batch is defined as follows: $$ \mathcal{L}_{ab}^{i} = -\log \frac{\exp(\mathrm{sim}(z_a^i, z_b^i)/\tau)}{\sum_{j=1}^{B} \exp(\mathrm{sim}(z_a^i, z_b^j)/\tau)} $$ where $a, b \in \{ pc, imu, pose, text \}$ and $\tau > 0$ is a (learnable) temperature hyperparameter. Symmetrically, we compute the loss in both directions by swapping the roles of the modalities, i.e., $\mathcal{L}_{ab}^{i}$ and $\mathcal{L}_{ba}^{i}$, which leads to: $$ \mathcal{L}_{a,b}^{i} = \frac{1}{2} \left( \mathcal{L}_{ab}^{i} + \mathcal{L}_{ba}^{i} \right) $$ As our main goal is to align $z_{pc}, z_{imu}, z_{pose}$, we employ two different losses.
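Both losses build on this per-pair symmetric NCE term, which can be sketched as a minimal NumPy reference (the actual training uses learned encoders and a learnable $\tau$):

```python
import numpy as np

def log_softmax_rows(x):
    x = x - x.max(axis=1, keepdims=True)  # numerical stability
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def symmetric_nce(za, zb, tau=0.07):
    """0.5 * (L_ab + L_ba), averaged over the batch; za, zb have shape (B, e)."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / tau  # sim(z_a^i, z_b^j) / tau
    loss_ab = -np.diag(log_softmax_rows(logits)).mean()    # a -> b direction
    loss_ba = -np.diag(log_softmax_rows(logits.T)).mean()  # b -> a direction
    return 0.5 * (loss_ab + loss_ba)
```

With perfectly aligned, mutually distinct embeddings the diagonal dominates and the loss approaches zero; shuffling the pairing increases it.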
First, we bind the subset of paired $z_{pc}, z_{imu}, z_{pose}$ with the respective text embeddings $z_{text}$: $$ \mathcal{L}_{text} = \sum_{i=1}^{B} tm^{i} \sum_{a \in M} \mathcal{L}_{a,text}^{i} $$ where $tm^{i}$ serves as a mask to exclude the batch elements without text pairings from this loss. Second, each individual sensing modality pair in $M^{*} := \{ (pc, imu), (pc, pose), (imu, pose) \}$ is optimized to be close to each other: $$ \mathcal{L}_{M} = \sum_{i=1}^{B} \sum_{(a,b) \in M^{*}} \mathcal{L}_{a,b}^{i} $$ In both $\mathcal{L}_{text}$ and $\mathcal{L}_{M}$, we do not weight each modality individually. Finally, we combine both losses to align the embeddings of corresponding point cloud, IMU, and pose sequences while constraining them to take small steps toward the text embedding space of CLIP. With $M := \{ pc, imu, skeleton \}$ being the set of modalities to align and $M^{*}$ their respective desired pairings, we optimize the following final loss function for each batch: $$ \mathcal{L}_{total} = \alpha \mathcal{L}_{text} + \beta \mathcal{L}_{M} $$ where $\alpha = 0.5, \beta = 0.5$ weight both loss terms equally. In our experiments, we train models for all possible modality combinations, which requires a corresponding change to the modality set $M$ and the respective pairings $M^{*}$ (e.g., when training only DeSPE, $M := \{ skeleton, pointcloud \}$). Finally, when training a model like DeSPE without text pairings, the overall loss simplifies to Equation 5, so that $\mathcal{L}_{total} = \mathcal{L}_{M}$.

# 4. Experiments

We evaluate the effectiveness of DeSPITE and its variants on the following tasks: modality matching, temporal moment retrieval using a different modality as a query, pre-training for point cloud human activity recognition, and several qualitative evaluations.

# 4.1. Datasets

We train our method on a merged version of LIPD [47] and Babel [44] (denoted as Babel+LIPD), where we map the text annotations from Babel to the AMASS [36] subsets present in LIPD. In this way, we are able to construct a large-scale dataset of real and synthetic LiDAR point cloud, IMU, and skeleton data with text annotations. To be more specific, we construct two versions of LIPD-Babel. First, LIPD-Babel-v1, where we use the official train-test split of LIPD$^1$, including DIP [21] and TotalCapture (TC) [54]. Second, LIPD-Babel-v2, where we use the train-val split of Babel for the AMASS subsets and add all the remaining data of LIPD to the training set. As LIPD is provided at 10 FPS, we downsample the Babel annotations to 10 FPS. After preprocessing the whole dataset with sliding windows of length 24, we obtain 502,958 / 85,551 training/testing windows for LIPD-Babel-v1, of which 85,551 training windows have text annotations, and 403,430 / 58,802 train/test windows for LIPD-Babel-v2, with 135,699 text training windows and 58,802 test annotations. Regarding downstream task performance for HAR, we evaluate our approach on HMPEAR [29], MSR-Action3D [27], and our Babel-LIPD-v2 train/test split that only includes Babel sequences. Both HMPEAR and MSR-Action3D include domain shifts, where HMPEAR uses a different kind of LiDAR sensor, and MSR-Action3D has very dense point clouds derived from depth maps.

# 4.2. Experimental Design and Metrics

We use the following tasks to evaluate the performance of DeSPITE (and its variants) and enable future research to compare against our baselines.
Throughout all experiments, all hyperparameters are kept the same across models.

# Task 1. Matching between Modalities

In multi-person scenes, matching IMU data to detected individuals in point cloud sequences is a challenging upstream task, which has not been explored before. This task can be generalized to an any-to-any modality matching problem, which we additionally evaluate. We evaluate all modality combinations IMU $\leftrightarrow$ PC, IMU $\leftrightarrow$ Skeleton, and PC $\leftrightarrow$ Skeleton. For each test set (LIPD-Test, TC, DIP), we generate 1000 artificial multi-person scenes (following designs in prior works [37, 40]). This is achieved by randomly sampling $n$ sequences from the test set first and then sampling a respective subsequence, leading to $n$ artificial subjects carrying out an activity simultaneously. The number of subjects per scene varies, $n \in \{2, 4, 8, 12, 16, 20, 24, 28, 32\}$, simulating different real-world scenarios. Given $n$ subjects, we report matching accuracy through argmax on the cosine similarities per row between all candidates.

# Task 2. Temporal Moment Retrieval between Modalities

Given a short snippet in one modality, the goal is to retrieve the corresponding temporal moment in the sequence observed with another modality. This task has been explored for, e.g., IMU-RGB [39] and skeleton-text [43], but not yet for LiDAR point clouds, IMU, and skeletons. We evaluate this on the three held-out test sets of LIPD (LIPD-Test, TC, DIP) using Recall@k ($k = 1, 10, 20, 50$) shots across all modality combinations. Performance is measured by computing the cosine similarity scores for all possible query-target pairs in all individual test set sequences and returning the top-k similar frame indices.
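The two evaluation protocols above (row-wise argmax matching over candidates, and top-k frame retrieval by cosine similarity) can be sketched as follows; the function names are ours, not from the paper's code:

```python
import numpy as np

def cosine_matrix(za, zb):
    """Pairwise cosine similarities between row embeddings of za and zb."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    return za @ zb.T

def matching_accuracy(z_query, z_target):
    """Task 1: assign each query to its most similar candidate (row-wise argmax);
    the ground-truth pairing is assumed to be the identity i -> i."""
    pred = cosine_matrix(z_query, z_target).argmax(axis=1)
    return float((pred == np.arange(len(z_query))).mean())

def topk_frame_indices(z_query, z_frames, k=10):
    """Task 2: return the k most similar frame indices for a single query embedding."""
    sims = cosine_matrix(z_query[None, :], z_frames)[0]
    return np.argsort(-sims)[:k]
```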
For each query, we compute the difference between all top-k returned time points and the ground truth. A retrieval is considered correct if it is within 10 frames ($\sim 1.5$ sec) of the ground truth. As the final score, we report the mean over the recall@k scores of all sequences in a dataset.

# Task 3. Pre-Training for Human Activity Recognition

We evaluate cross-modal pre-training for point clouds, IMUs, and skeletons via linear/non-linear probing and fine-tuning. HAR pre-training/testing is done on LIPD-Babel-v2, with additional point cloud testing on HMPEAR and MSR-Action3D. Results follow standard metrics: clip segment accuracy for MSR-Action3D, segment accuracy for HMPEAR and LIPD-Babel-v2 (excluding transition labels). We do not evaluate with additional skeleton/IMU datasets, since transfer learning is strongly limited by serious dataset-specific variations in joints and different IMU channel counts for these modalities.

# Task 4. Retrieval between Modalities from Database

We qualitatively evaluate retrieval from a “large database” between Point Cloud $\leftrightarrow$ IMU, IMU $\leftrightarrow$ Skeleton, and Point Cloud $\leftrightarrow$ Skeleton. This enables motion analysis across representations, aiding interpretability (e.g., skeletons or point clouds simplify IMU visualization).

# 4.3. Implementation Details

For point clouds, we use the PST-Transformer [13] with a SimCLR-based projection head [6]. IMU is encoded with a 2-layer LSTM [19], skeletons with the ACTOR encoder [42], and text with a frozen CLIP text encoder [45]. All models are pre-trained for 145 epochs with 512-d embeddings, the Adam optimizer [22], learning rate $10^{-4}$, and batch size 1024. We subsample 256 points using farthest point downsampling (FPD) on each frame and use 24-frame windows as input to all models.
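The farthest point downsampling step can be sketched as the standard greedy selection below; the paper does not detail its exact variant, so this is an assumed formulation:

```python
import numpy as np

def farthest_point_sample(points, m, start=0):
    """Greedily pick m of the (N, 3) points so that each newly chosen point
    is the one farthest from the set already chosen."""
    dist = np.linalg.norm(points - points[start], axis=1)
    chosen = [start]
    for _ in range(m - 1):
        nxt = int(dist.argmax())  # point farthest from the current chosen set
        chosen.append(nxt)
        # keep, for every point, its distance to the nearest chosen point
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return points[np.array(chosen)]
```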
Augmentations (random translation, scaling, Gaussian noise) are employed during training to prevent overfitting. For a fair comparison, we only use the weights from epoch 145 across all models. HAR fine-tuning roughly follows [13], with batch size 24 and 35 epochs (SGD [48], warmup to learning rate 0.01, 0.1 decay at epochs 20 and 30). In HMPEAR, we subsample 1024 points using FPD and use 24-frame windows. In MSR-Action3D, we follow the standard 2048-point, 24-frame window setting.

# 4.4. Results: Multi-Person LiDAR-IMU Matching

Figure 2 shows our results for matching between IMU $\leftrightarrow$ PC, IMU $\leftrightarrow$ Skeleton, and PC $\leftrightarrow$ Skeleton across all trained model variants (all specific numbers in Appendix 7). The number of subjects is varied on the x-axis, and matching accuracy is reported on the y-axis. Each row corresponds to the respective test set (TC, DIP, LIPD). First, our experiments reveal that models trained with text (i.e., DeSPITE, DePITE, DeSITE, DeSPTE) perform worse in almost all scenarios than models trained solely on the sensing modalities alone (i.e., DeSPE, DePIE, DeSIE, DeSPIE), showing that this task does not benefit from text embeddings. Second, we find that matching between IMU, point clouds, and skeletons can be effectively learned, with up to perfect matching scores for a smaller number of subjects. As expected, a larger number of subjects becomes more challenging.

# 4.5. Results: Temporal Moment Retrieval

Figure 3 shows our results for temporal moment retrieval between IMU $\leftrightarrow$ Point cloud, IMU $\leftrightarrow$ Skeleton, and Point cloud $\leftrightarrow$ Skeleton across all trained model variants (all specific numbers in Appendix 7). The $k$ retrieval shots are varied on the x-axis, and the respective recall@k is reported on the y-axis. Each row shows the results for each respective test set (TC, DIP, LIPD).
First, we observe the same result for temporal moment retrieval as for matching in Figure 2: all models trained with text perform worse than models trained solely on the sensing modalities alone. Second, our evaluation demonstrates that temporal moment retrieval is solved best between IMU $\leftrightarrow$ Skeleton, where DeSIE demonstrates that training between both modalities alone is very effective. The runner-up is Point cloud $\leftrightarrow$ Skeleton, where DeSPIE and DeSPE achieve almost identical performance. Finally, our experiments reveal that the most challenging setting is IMU $\leftrightarrow$ Point cloud, leaving room for future work to propose more effective solutions.

Figure 2. Matching performance between all modality pairs IMU $\leftrightarrow$ Point cloud, IMU $\leftrightarrow$ Skeleton, and Point cloud $\leftrightarrow$ Skeleton, reporting mean accuracy for $n \in \{2, 4, 8, 12, 16, 20, 24, 28, 32\}$ subjects across 1000 artificial scenes.

Figure 3. Temporal moment retrieval performance across all modalities. Recall is reported for top-1, 10, 20, and 50 retrievals, considering a match correct if within 10 frames of the ground truth (half the window size).

# 4.6. Results: 3D Point Cloud Human Activity Recognition

The pre-trained embeddings of all versions of DeSPITE can be fine-tuned for HAR. We compare our approach against the recent state of the art on MSR-Action3D and HMPEAR, and perform ablations on the LIPD-Babel-v2 split. MSR-Action3D: Table 1 shows that fine-tuning DeSPITE, DeSPIE, or DePITE embeddings surpasses all current state-of-the-art point cloud HAR pre-training methods, despite encountering a domain shift from 256 to 2048 points. Our approach, combined with the PST-Transformer, even outperforms PvNext [58] $(94.77 < 95.47)$ and MAMBA4D [31] $(93.38 < 95.47)$ and nearly matches KAN-HyperpointNet [7] $(95.59 > 95.47)$.
HMPEAR: As shown in Table 2, we achieve new SOTA on HMPEAR, outperforming all prior point cloud, RGB, and multi-modal approaches. While our setup uses twice the frames of previous methods, pre-training the PST-Transformer in the same setup with DeSPITE, DeSPIE, or DePITE improves its performance by nearly $4\%$, demonstrating the effectiveness of HAR pre-training. LIPD-Babel-v2: Table 3 shows that all our models outperform baselines (PST-Transformer, LSTM, ACTOR) trained from scratch on LIPD-Babel-v2. We explore various freezing strategies, as well as linear/non-linear probing and projection heads, with detailed ablations in the supplementary material (Tables 6, 7, 8). In Table 3, only the best results of DePITE, DeSPIE, and DeSPITE are reported, which consistently achieve strong performance across all three datasets. Notably, across MSR-Action3D and HMPEAR, DeSPITE, DeSPIE, and DePITE consistently achieve the best performance, underlining the advantage of pre-training with more modalities. Furthermore, different from the results for matching and temporal moment retrieval, we find that training with text benefits the fine-tuning performance for HAR.

Table 1. 24-frame classification results on the MSR-Action3D dataset; clip-level accuracy (Acc) is reported.

Table 2. HAR classification results on the HMPEAR action recognition dataset; segment-level accuracy Acc(Seg) is reported.

# 4.7. Qualitative Results: Embedding Space

Using t-SNE [55], we analyze the learned embedding space of DeSPITE and DeSPIE. Figure 4 (a, b) shows embeddings of the same 50-frame sequence per modality (skeletons $\bullet$, point clouds $\times$, IMU $\star$), where both models exhibit strong cross-modal alignment, although DeSPIE demonstrates tighter associations. Figure 4 (c, d) extends this to 20 sequences, revealing distinct clusters that indicate semantic motion encoding.
However, DeSPIE’s embeddings are more distinct, qualitatively supporting our retrieval findings and confirming that text embeddings negatively affect matching and temporal moment retrieval performance.

Table 3. HAR classification results on the Babel-LIPD-v2-CLS action recognition dataset; segment-level accuracy Acc(Seg) is reported.

# 4.8. Qualitative Results: Retrieval from AMASS and LIPD Database

Figure 5 shows that we can interpret IMU signals using our method by querying a large motion database like AMASS or LIPD. We show retrievals of skeletons from AMASS and point clouds from LIPD using IMU (top) as a query, also showing the ground truth. The retrievals semantically capture the motion performed by the IMU signal, allowing us to understand that the IMU signals correspond to walking while turning (left), doing a lunge (middle), and a forward stretch (right), respectively. Extending IMU2CLIP [39], our method can help interpret IMU signals with skeletons and point cloud sequences. Figure 6 shows the corresponding retrievals between skeletons and point clouds. On the left, a pedestrian performs a “lunge” motion, captured effectively by the retrieved skeletons from the AMASS database. On the right, a pedestrian performs a T-pose and then moves their arms into a normal standing position. The retrieved point clouds from the LIPD database follow these motions, showing a learned correspondence of motion between both modalities. We can see that different motion sequences are retrieved with different point cloud densities. To better assess the effectiveness of DeSPITE for cross-modal retrieval, we provide animated videos in the supplementary material. Figure 4.
Using t-SNE, we visualize the joint embedding space between skeletons ($\bullet$), point clouds ($\times$), and IMU ($\star$) for 1 (a, b) and 20 (c, d) randomly sampled sequences of 50 consecutive frames, for both DeSPIE (left) and DeSPITE (right). Each point is colored by its time index with a colormap to emphasize the similarity among the modalities over time.

Figure 5. IMU $\to$ Skeleton and IMU $\to$ Point cloud retrieval from the AMASS and LIPD databases, respectively.

Figure 6. Point cloud $\to$ Skeleton and Skeleton $\to$ Point cloud retrieval from the AMASS and LIPD databases, respectively.

Figure 7. We show two random IMU queries to localize the respective moment in a 1400-frame-long point cloud sequence from the unseen LIPD test set.

# 4.9. Qualitative Results: Temporal Moment Retrieval

Figure 7 illustrates how effectively an encoded IMU query can retrieve relevant moments in a point cloud video. We visualize cosine similarity across a 1400-frame sequence containing diverse activities, with peaks aligning precisely with the ground truth timestamps. Despite no explicit training for this, in Query 2, our approach identifies repeated instances of a person standing still, highlighting the ability of DeSPITE to encode semantic meaning for certain activities within the embedding space. We provide an animated video of this in the supplementary material (“temporal moment retrieval/temporal retrieval imu.gif”).
Despite LiDAR (Light Detection and Ranging) being an effective privacy-preserving alternative to RGB cameras for perceiving human activities, it remains largely underexplored in the context of multi-modal contrastive pre-training for human activity understanding (e.g., human activity recognition (HAR), retrieval, or person re-identification (RE-ID)). To close this gap, our work explores learning the correspondence between LiDAR point clouds, human skeleton poses, IMU data, and text in a joint embedding space. More specifically, we present DeSPITE, a Deep Skeleton-Pointcloud-IMU-Text Embedding model, which effectively learns a joint embedding space across these four modalities through noise contrastive estimation. At the heart of our empirical exploration, we combine the existing LIPD and Babel datasets, which enables us to synchronize data of all four modalities and explore the learning of a new joint embedding space. Our experiments demonstrate novel human activity understanding tasks for point cloud sequences enabled through DeSPITE, including Skeleton $\leftrightarrow$ Point cloud $\leftrightarrow$ IMU matching, retrieval, and temporal moment retrieval. Furthermore, we show that DeSPITE is an effective pre-training strategy for point cloud HAR through experiments on MSR-Action3D and HMPEAR.
# 1 Introduction LLM-based agentic AI systems combine multi-step reasoning with external tools and memory to solve open-ended tasks such as code generation, web navigation, planning, and transactional services like booking and ordering [Acharya et al., 2025]. By doing so, they extend to complex, real-world problems beyond standard LLM benchmarks. Since such real-world applications serve speakers of diverse languages, maintaining consistent reliability in every language becomes critical. However, since agentic behavior is grounded in LLMs, which often perform inconsistently across languages [Deng et al., 2023, Wang et al., 2023], agents may inherit these multilingual limitations as well, affecting their functionality and trustworthiness. This presents a barrier to equitable access, as non-English users may face degraded responses, incorrect tool actions, or unsafe behaviors—failures that can lead to actual harm in the real world, including erroneous transactions, data corruption, and other security vulnerabilities [Zhang et al., 2024]. To assess emerging agentic systems, various benchmarks have been proposed to evaluate agent performance across a range of tasks [Mialon et al., 2023, Jimenez et al., 2023, Chang et al., 2024, Xu et al., 2024]. However, these benchmarks remain English-only. In contrast to multilingual LLM benchmarks [Dang et al., 2024, Shi et al., 2022, Goyal et al., 2022], no equivalent exists for agentic AI tasks—creating a blind spot in our understanding of cross-language performance, safety, and security. In this paper, we address this gap. We hypothesize that multilingual settings will reveal performance and security gaps in agentic systems that are not captured by the existing, English-only benchmarks. To investigate this, we introduce MAPS, a Multilingual Agentic AI Benchmark Suite for Performance and Security. 
MAPS is based on four established agentic benchmarks: GAIA (real-world tasks) [Mialon et al., 2023], SWE-bench (code generation) [Jimenez et al., 2023], MATH (mathematical reasoning) [Hendrycks et al., 2021], and the Agent Security Benchmark (security) [Zhang et al., 2024]. These benchmarks are extended to ten typologically diverse languages beyond English2, by employing a hybrid machine- and LLM-based translation approach following Ki and Carpuat [2024] with extended verification and enhancements. In total, MAPS includes 805 unique tasks, each available in 11 language versions—including the original English and 10 translated variants—for a total of 8,855 instances. An overview of the benchmark structure is shown in Figure 1. Figure 1: MAPS benchmark suite evaluates LLM-based agents across 11 languages and 4 agentic benchmarks covering performance and security. To demonstrate the use of MAPS and test our hypothesis, we selected a leading open-source agent associated with each of the four original benchmarks and applied it to the corresponding multilingual extension. We observed notable declines in both task performance and security when moving from English to other languages, with the severity of these drops varying by task type and correlating with the proportion of non-English input, suggesting that multilingual performance interventions should be targeted based on input composition and task sensitivity. Beyond overall degradation, our findings reveal that multilingual inputs can amplify agentic vulnerabilities in safety-critical tasks, highlighting the need for multilingual risk assessment. These results empirically support our hypothesis and demonstrate the utility of MAPS as a tool for systematic, multilingual evaluation of agentic AI systems.
The primary contributions of this paper are threefold:

• To the best of our knowledge, we introduce the first multilingual benchmark suite for agentic AI, extending four widely used benchmarks into ten typologically diverse languages for systematic performance and security assessment.

• The efficacy and quality of the proposed benchmark are demonstrated through a large-scale evaluation of leading agents as well as human expert verification.

• We present the first quantifiable analysis and evidence that multilingual settings reveal critical performance, safety, and security gaps in agentic systems, along with actionable recommendations for improving their development.

# 2 Background and Related Work

# 2.1 Agentic AI Benchmarks

With the rapid advancement of LLM-based agents, a diverse suite of benchmarks has been developed to assess their autonomy, tool use, planning, and memory integration [Yao et al., 2024, Xu et al., 2024, Yehudai et al., 2025]. We organize these suites along three primary dimensions. Evaluation objective: performance-oriented benchmarks measure task completion, multi-step reasoning, and correct tool invocation (e.g., AgentBoard [Chang et al., 2024]), whereas security-focused suites probe robustness to adversarial inputs, jailbreaks, and unsafe behaviors (e.g., AgentHarm [Andriushchenko et al., 2024]). Agentic task scope: full-agentic evaluations present only problem statements and expected outcomes, requiring end-to-end planning and execution (e.g., GAIA [Mialon et al., 2023]), while semi-agentic frameworks supply scaffolding, such as code templates or mock APIs, to isolate the LLM’s reasoning and tool-selection core (e.g., AppWorld [Trivedi et al., 2024]).
Design and evaluation characteristics: most benchmarks span a limited set of domains (three to five use cases), typically including real-world information retrieval and navigation (e.g., AssistantBench [Yoran et al., 2024]), code generation (e.g., SWE-bench [Jimenez et al., 2023]), reasoning and planning (e.g., MATH [Hendrycks et al., 2021], TravelPlanner [Xie et al., 2024]), and security scenarios (e.g., Agent Security Benchmark [Zhang et al., 2024]). They use fixed task counts and predefined difficulty tiers, and to enable reliable, objective measurement despite agents’ open-ended capabilities, they often restrict tasks to closed-form problems with definitive ground truth, allowing clear determination of success or failure [Jimenez et al., 2023, Mialon et al., 2023]. A detailed comparison of benchmark design choices, task types, and evaluation properties is provided in the supplementary materials.

While multilingual LLM benchmarks such as XTREME [Hu et al., 2020], FLORES [Goyal et al., 2022], and SIB-200 [Adelani et al., 2023] have enabled broad cross-lingual evaluation, they do not assess interactive decision-making, tool use, or task execution, which are core elements of agentic systems. As a result, existing multilingual benchmarks fall short of capturing the complexities and vulnerabilities that arise when agents operate in non-English settings. This leaves non-English users exposed to agentic failures in their native languages and underscores the need for fully agentic benchmarks that include performance and security evaluation, high data fidelity, and comprehensive multilingual assessment - gaps that our benchmark is specifically designed to address.

# 2.2 Multilingual Benchmarks and Multilingual Limitations of General-Purpose LLMs

Recent studies show that pre-trained LLMs often struggle with non-English input, especially in languages with limited training resources or those typologically distant from English.
Multilingual benchmarks such as XTREME [Hu et al., 2020] and XGLUE [Liang et al., 2020] report consistent accuracy drops when moving from English to languages such as Swahili or Nepali. These gaps reflect an imbalance in pretraining corpora, where English accounts for over 90% of the data, as well as challenges in tokenizing morphologically rich languages and the scarcity of fine-tuning data in many languages [Jha, 2024]. Notably, even large models (e.g., GPT-4, Llama 405B) face a “cross-lingual knowledge barrier” on MMLU [Hendrycks et al., 2020] and on safety tasks [Grattafiori et al., 2024], showing that scale alone does not resolve multilingual performance deficits [Chua et al., 2024].

Building on these performance gaps, LLMs also face robustness and security challenges in multilingual contexts. Since most alignment and red-teaming efforts have been English-centric, models are more prone to generate toxic or policy-violating outputs when processing non-English prompts [Deng et al., 2023, Wang et al., 2023, Aakanksha et al., 2024]. Furthermore, hallucination rates increase and confidence calibration degrades outside English, causing models to produce fluent yet unreliable or potentially harmful content in undersupported languages [Xue et al., 2024]. Although security interventions, such as multilingual alignment, have been shown to be effective in reducing harmful output across languages, they often incur a measurable cost in downstream performance or increased latency [Aakanksha et al., 2024].

Given that agentic AI systems are built directly on these LLMs, we hypothesize that they inherit the same language-dependent performance and security limitations. As these agents carry out real-world tasks such as executing code, querying external tools, or navigating web environments, any inherited shortcomings can lead to severe consequences. Yet, to our knowledge, no systematic evaluation has probed how these vulnerabilities manifest within agentic systems.
To address this gap, we introduce MAPS, our multilingual agentic benchmark suite, in Section 3.

# 3 MAPS: Multilingual Agentic AI Benchmark Suite

To support multilingual evaluation of agentic systems, we construct a benchmark suite by extending established English-language datasets into multiple languages. This process requires careful dataset selection, translation procedures that preserve semantic and structural integrity, and mechanisms for ensuring evaluation consistency. The following subsections detail our methodology for translation, benchmark construction, and dataset composition.

Figure 2: The hybrid translation pipeline. The machine translation $M(s, L_t)$ of an English source $s$ is verified by an LLM; if verification fails, the LLM translates directly ($\Phi_{\mathrm{direct}}$); otherwise the LLM refines the MT output ($\Phi_{\mathrm{enhancement}}$), and an integrity check decides whether to return the refined translation or fall back to the machine translation.

# 3.1 Translation Pipeline

Reliable multilingual evaluation of agentic AI systems hinges on translating task instructions with both semantic and structural cross-language fidelity. Neural MT excels at preserving format and structure, but struggles in low-resource or specialized domains [Koehn and Knowles, 2017, Aharoni et al., 2019]. Translation via instructed LLMs offers broader high-level capabilities at the cost of occasional hallucinations and semantic drift [Hendy et al., 2023, Yan et al., 2024]. To balance these trade-offs, hybrid pipelines combining format-preserving MT with LLM-based refinement were suggested by Ki and Carpuat [2024] and Mohammed [2025]. For MAPS, we extend Ki and Carpuat [2024] in two ways: first, since their pipeline was not designed with our benchmarks in mind, substantial per-benchmark prompting was required; second, we added automated quality checks, fallbacks, and expert verification to ensure the cross-language fidelity needed for agentic benchmarks (Fig. 2).
Formally, let us express our translation pipeline as a function $T : \mathcal{S} \times \mathcal{L} \to \mathcal{T}$, where $s \in \mathcal{S}$ is a task-instruction instance in the source language (English), $L_t \in \mathcal{L}$ is the target language, and $t \in \mathcal{T}$ is the resulting translated output. The pipeline begins with machine translation (MT) to establish a structural foundation: denote by $M(s, L_t)$ the MT function, implemented as a high-quality, off-the-shelf NMT system. Its output provides a structurally faithful baseline for subsequent steps. Next, we apply a verification step using an LLM to assess whether the translation adequately preserves the source meaning. This is modeled as a binary function $A(s, M(s, L_t), L_t) \to \{0, 1\}$, where the LLM compares the original and translated texts to detect major semantic errors or omissions. Based on verification outcomes, the pipeline follows one of two distinct paths. If $A = 0$, indicating machine translation failure, the pipeline employs direct LLM translation: denote by $\Phi_{\mathrm{direct}}(s, L_t)$ the output of an LLM prompted to translate $s$ directly into language $L_t$ (without using the MT output). If $A = 1$, indicating acceptable machine translation, an LLM enhances the translation while preserving its basic structure: denote by $\Phi_{\mathrm{enhancement}}(s, M(s, L_t), L_t)$ the output of an LLM guided to refine and improve the MT output while maintaining structural consistency. To ensure semantic integrity, we apply a second binary check: $I(s, \Phi_{\mathrm{enhancement}}) \to \{0, 1\}$.
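The control flow just described, including the fallback to the verified MT output when the integrity check fails, can be sketched in Python. This is an illustrative outline only: the callables `mt`, `verify`, `direct`, `enhance`, and `integrity` are hypothetical stand-ins for the NMT system and the LLM prompts, not the actual implementation.

```python
def translate(s, L_t, mt, verify, direct, enhance, integrity):
    """Sketch of T(s, L_t); every callable is a hypothetical stand-in."""
    m = mt(s, L_t)                 # M(s, L_t): machine-translation baseline
    if not verify(s, m, L_t):      # A = 0: MT fails semantic verification
        return direct(s, L_t)     # Phi_direct: translate directly with the LLM
    e = enhance(s, m, L_t)         # Phi_enhancement: LLM refines the MT output
    if not integrity(s, e):        # I = 0: refinement failed the integrity check
        return m                   # fall back to the verified MT output
    return e                       # return the enhanced translation
```

Returning the already-verified MT output when refinement fails the integrity check reflects the pipeline’s preference for conservative outputs over unreliable refinements.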
This integrity check detects common LLM failure modes, such as hallucinations, omissions, misinterpretations (e.g., answering instead of translating), and semantic drift. If this verification fails, we revert to the original machine translation (which passed the initial verification test). The added conditional steps form a robust decision framework: if machine translation is rejected, we use a direct LLM translation; if it is accepted but the enhancement fails integrity verification, we fall back to the machine translation; otherwise, we use the enhanced translation. This structure ensures graceful degradation, favoring conservative outputs when refinement is unreliable. Formally, the final translation is given by:

$$T(s, L_t) = \begin{cases} \Phi_{\mathrm{direct}}(s, L_t), & \text{if } A(s, M(s, L_t), L_t) = 0 \\ M(s, L_t), & \text{if } A(s, M(s, L_t), L_t) = 1 \text{ and } I(s, \Phi_{\mathrm{enhancement}}(s, M(s, L_t), L_t)) = 0 \\ \Phi_{\mathrm{enhancement}}(s, M(s, L_t), L_t), & \text{otherwise} \end{cases}$$

To ensure the reliability of this pipeline across languages and task types, we conducted human verification on a representative subset of translations. Evaluation design and results are detailed in Subsection 3.3, with additional implementation details in the Supplementary Material.

# 3.2 Dataset Selection and Composition

Dataset Selection. To support robust multilingual evaluation across agentic capabilities, we construct the MAPS benchmark suite based on established agentic AI benchmarks.
These were selected based on four criteria: (1) strong adoption and recognition within the research community, including prior use in agentic evaluation; (2) clearly defined, closed-form answers to enable controlled evaluation; (3) sufficient difficulty to challenge current open-source agents without saturating performance; and (4) practical solvability, ensuring that multilingual degradation can be meaningfully measured. Based on these criteria, we selected four datasets spanning real-world reasoning, software engineering, mathematical problem solving, and security assessment.

GAIA. GAIA [Mialon et al., 2023] is a benchmark designed to evaluate agents’ performance on real-world assistant tasks. It includes curated questions that require multi-step reasoning and autonomous use of tools such as web browsers, code interpreters, or document analyzers. Each question has a single correct answer, and responses are evaluated by an exact match to a reference output.

SWE-bench. SWE-bench [Jimenez et al., 2023] is a software engineering benchmark constructed from real GitHub issues and associated pull requests across popular Python repositories. Each task presents a bug report and a codebase snapshot, and requires the agent to evaluate whether a proposed patch correctly resolves the issue. We adopt the verified subset, in which agents are tasked with validating a patch rather than generating one.

MATH. The MATH dataset [Hendrycks et al., 2021] includes high-school-level mathematical problems across seven topics, including algebra, geometry, and calculus. Tasks are structured to require symbolic manipulation and multi-step reasoning. Agent responses are evaluated by exact match against a reference solution.

Agent Security Benchmark (ASB). The ASB benchmark [Zhang et al., 2024] provides a structured evaluation of agent robustness against adversarial threats, including prompt injections, memory poisoning, and tool misuse.
Agents interact with injected prompts or environments, and evaluation is based on whether safety policies are violated, measured by attack success rate and refusal rate.

Data Composition. The metadata below summarizes the multilingual extension, including language coverage, scale, and pre-processing.

Translated Languages. We selected the following ten typologically and geographically diverse languages: German, Spanish, Portuguese (Brazil), Japanese, Russian, Italian, Arabic, Hebrew, Korean, and Hindi. This selection enables the evaluation of agent performance across a wide range of scripts, linguistic structures, and regional user populations.

Dataset Handling. To preserve the integrity and utility of the original datasets, we applied only minimal and targeted modifications. Across all datasets, we appended translations without modifying or removing any original metadata (such as task type, difficulty level, or available tools). Domain-specific syntax—such as equations in MATH, code snippets in SWE-bench, and adversarial prompts in ASB—was preserved exactly, maintaining the original task structure and technical fidelity. For MATH and SWE-bench, which were not originally designed for agentic evaluation, we further applied selective filtering to retain only the most challenging tasks based on the task difficulty field. This follows common practice in prior work to align non-agentic datasets with agentic evaluation settings [Wu et al., 2023], ensuring meaningful evaluation of agent behavior while avoiding trivial cases.

Data Volume. To balance performance and security evaluation, our benchmark comprises 805 tasks: 405 from performance-oriented datasets (GAIA, SWE-bench, MATH) and 400 from the Agent Security Benchmark. We selected 165 tasks from GAIA (full validation set), 140 high-difficulty tasks from MATH (20 per topic across 7 topics), and 100 hard and medium tasks from SWE-bench. The remaining 400 tasks include all security-relevant prompts from ASB.
Each task was translated into 10 target languages, and combined with the original English version, this results in a total of 8,855 multilingual tasks across 11 languages. To validate the benchmark’s utility and examine multilingual effects, we applied a leading agent to each dataset. Full details and performance results are reported in Section 4.

# 3.3 Translation Implementation and Verification

Translation Implementation Details. We implemented the hybrid translation pipeline described in Section 3.1 using a combination of commercial and open-source tools. For machine translation, we used the Google Translate NMT API, selected for its support across all ten target languages. To preserve task fidelity, structural elements (e.g., equations, variables, code) that MT systems often mistranslate were temporarily masked and restored after translation. For LLM-based refinement and quality control, we used Cohere Command A and GPT-4o, both multilingual models executed with deterministic decoding (temperature set to zero) to ensure output consistency. System prompts were crafted individually for each task to accommodate domain-specific structures (e.g., code snippets, equations, web URLs), ensuring that the models preserved both intent and format. The code is publicly available, and representative examples of these prompts are provided in the Supplementary Material.

Human Verification Protocol. To assess translation quality, we manually verified a representative sample of 2,000 translations, covering 25% of the benchmark, proportionally sampled across datasets and languages. Each item was rated by a bilingual annotator fluent in English and the target language on a 1–5 Likert scale across three criteria: adequacy (semantic fidelity), fluency (grammatical and stylistic naturalness), and formatting accuracy (preservation of LaTeX, code, etc.).
A fourth metric, answerability, measured whether the translation preserved intent well enough for the annotator to confidently answer the question as if it were in English. Annotator instructions are provided in the Supplementary Material. To validate the reliability of the verification process, we embedded “honeypot” samples with intentional errors; annotators reliably flagged these cases, confirming attentiveness and quality control. Evaluation results confirm high translation quality across the benchmark, with an answerability rate of 94.4%, corresponding to a total error rate of 5.6%. Translations also received average scores of 4.47 for adequacy, 4.60 for fluency, and 4.75 for formatting accuracy (on a 1–5 Likert scale), supporting the benchmark’s preservation of semantic fidelity, linguistic naturalness, and structural integrity. Full per-language results and analysis are included in the Supplementary Material. To support high-precision use cases, we also release a “verified” subset of the benchmark, consisting of 190 translations per language that passed human review across all four datasets.

# 4 Experiments

We now demonstrate the utility of MAPS through multilingual evaluation of leading open-source agents.

# 4.1 Experimental Settings

Agent Assignment per Dataset. To demonstrate the utility of our benchmark, we evaluate open-source agents on each dataset and assess their performance and robustness under multilingual settings. While a unified agent would offer broader coverage and a more controlled evaluation, current systems lack the generalization needed to perform well across diverse tasks [Gioacchini et al., 2024, Chang et al., 2024]. To isolate the effect of language variation, we retain each agent’s original configuration, including tools, prompts, and system settings, without any modification.

Figure 3: Performance of open-source agents across languages on four agentic benchmarks: GAIA, SWE-bench, MATH, and ASB.
Each bar represents the agent’s accuracy (or attack success rate in ASB) for a given language, with English shown as the baseline. Error bars indicate the standard deviation across three runs. Performance differences reflect each agent’s degradation or resilience in multilingual settings.

For GAIA, we used the OpenDeepResearch agent [von Platen et al., 2024], which integrates retrieval, web browsing, and tool use to support real-world reasoning. For MATH, we adopted MathChat [Wu et al., 2024], a zero-shot agent combining multi-turn reasoning with Python execution and the Wolfram Alpha tool. For SWE-bench, we applied SWE-agent [Yang et al., 2024b], which enables autonomous software reasoning through repository navigation, file editing, and test execution. For ASB, we built on the authors’ existing infrastructure and evaluated the original ten-agent system against both direct and indirect prompt injection attacks across a variety of tasks and languages. Each agent was executed using the LLM backbone reported in its original implementation, all of which are considered multilingual models. Specifically, OpenDeepResearch used GPT o1, MathChat used GPT-4, SWE-agent used GPT-4.1, and ASB used Qwen2. Full configuration details, including model versions and API providers, are provided in the Supplementary Materials.

Experiment Protocol. For each benchmark, the agent was evaluated three times in each of the 11 target languages, yielding a total of 33 runs per dataset. We report the mean and standard deviation over these runs. We used the original English task definitions and their translations, without modifying or translating internal agent logic and processing flows such as system prompts or tools.

Metrics. We adopt the original evaluation metrics from each benchmark to ensure consistency with prior agent evaluations. For MathChat (MATH), we report answer accuracy. For OpenDeepResearch (GAIA), we report the percentage of answers matching either the English or translated reference.
For SWE-agent (SWE-bench), we report the percentage of resolved instances, defined as the percentage of submitted patches that successfully resolve the coding issue. For the ASB agent, we report the attack success rate (ASR), a standard metric in the security domain that represents the percentage of attacks that bypass the safety mechanisms. Additionally, we introduce a new metric, the Multilingual Effect, which quantifies the performance or security gap between English and the average of all other languages. Given an evaluation metric $M$, the Multilingual Effect is defined as follows:

$$\mathrm{Multilingual\ Effect} = \frac{1}{n} \sum_{i=1}^{n} M_{\mathrm{lang}_i} - M_{\mathrm{en}}$$

where $M_{\mathrm{en}}$ denotes the performance in English, $n$ is the number of non-English languages in the dataset (in our case $n = 10$), and $M_{\mathrm{lang}_i}$ represents the performance in the $i$-th non-English language.

Figure 4: (a) Task-accuracy degradation versus the percentage of translated-language tokens in the input prompt; (b) per-language Multilingual Effect on task accuracy and security degradation (ASR), across GAIA/OpenDeepResearch, Agent Security Bench/ASB, MATH/MathChat, and SWE-bench/SWE-agent.

# 4.2 Results

Figure 3 presents the performance of open-source agents across all four benchmarks in English and the ten target languages. In GAIA and ASB, we observe clear performance and security drops: non-English languages consistently underperformed compared to English, with reductions of up to 16% (GAIA) and a rise in vulnerability by up to 17% in ASB.
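For concreteness, the Multilingual Effect defined in Section 4.1 can be computed with a small helper; the language codes and scores in the example below are illustrative values, not measured results.

```python
def multilingual_effect(scores, baseline="en"):
    """Average of M_lang over the non-English languages, minus M_en.

    scores: dict mapping language code -> metric value; must contain baseline.
    """
    others = [v for lang, v in scores.items() if lang != baseline]
    return sum(others) / len(others) - scores[baseline]

# Hypothetical accuracies: English 0.80, two other languages 0.70 and 0.60.
effect = multilingual_effect({"en": 0.80, "de": 0.70, "ja": 0.60})  # ≈ -0.15
```

A negative value indicates degradation relative to English, while a value near zero indicates multilingual parity.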
Notably, SWE-bench and MATH exhibit only minor variation across languages, with most scores clustering around the English baseline. These results reveal important differences in how multilingual degradation manifests across task types. Although all tasks require complex reasoning, some are more constrained than others. For instance, SWE-bench is limited to well-structured Python patches designed to fix specific test cases. As a result, the reliance on natural language explanations is reduced, with greater emphasis placed on strict Pythonic syntax and code correctness. On the other hand, GAIA focuses on solving real-world tasks with much more flexibility, so the natural-language problem statement carries far more weight. Additionally, in benchmarks like MATH and SWE-bench, the opportunity for translation is inherently limited, as a large portion of the input consists of mathematical notation or source code, naturally reducing the multilingual effect. To understand this variation, we examine a potential driver: the proportion of localized, target-language-oriented content in each benchmark’s input.

Interestingly, we observe that Japanese yields the lowest ASR (attack success rate) in the ASB benchmark, indicating the highest robustness to adversarial inputs. This result can be partially attributed to the fact that the ASB agent was implemented using the Qwen2 model [Yang et al., 2024a], which is known for its strong alignment for Japanese language tasks. Qwen2 has consistently demonstrated strong performance in Japanese-specific LLM benchmarks and leaderboards, suggesting that alignment and fine-tuning in a particular language can significantly enhance resilience against multilingual adversarial prompts. This reinforces the importance of language-specific alignment training in the development of robust and secure agentic systems.

Figure 4 examines the relationship between prompt composition and multilingual performance.
Part (a) shows a correlation between the percentage of non-English tokens in the input and the average performance gap (relative to English) across all four datasets. Benchmarks with higher proportions of localized, target-language-oriented input, such as GAIA and ASB, exhibit greater degradation, whereas SWE-bench, with predominantly English input (e.g., code), shows higher preservation. From part (b), we can see that there is no clear correlation between multilingual security robustness (ASB) and multilingual performance degradation. This disconnect is especially clear in real-world, language-heavy tasks like GAIA, where performance drops sharply, while structured tasks like SWE-bench and MATH remain largely unaffected. This highlights that multilingual security alignment does not directly track with multilingual task accuracy, notably in language-rich agentic tasks.

# 5 Discussion

This section presents practical recommendations for multilingual agent deployment and directions for advancing the benchmark in future work.

# 5.1 Guidelines for Multilingual Evaluation and Risk Assessment

Language-Aware Deployment Guidelines. Before deploying an AI agent in a multilingual setting, analyze the linguistic composition of its expected input, particularly the balance between structured elements (e.g., code, formal queries) and localized natural language. Inputs with a high proportion of non-English content, especially those involving less formalized or more natural language, tend to increase the risk of performance and safety degradation. We therefore recommend that, in any such case, developers conduct a Multilingual Benchmark Assessment using a diverse, language-sensitive evaluation suite, such as ours, for AI agents operating across languages. This helps reveal hidden vulnerabilities and promotes reliable real-world behavior in multilingual conditions.

Prioritize Multilingual Adaptation by Task Type.
Our findings suggest that the need for multilingual adaptation in agentic systems should be guided by task type. For structured tasks with minimal linguistic variability, such as coding, cross-lingual transfer can often be achieved with minimal adjustment. However, for complex, real-world tasks or safety-critical decisions (e.g., GAIA, ASB), multilingual robustness remains limited, and thus dedicated multilingual alignment and adaptation are essential. MAPS offers a practical framework to identify where multilingual adaptation is needed, helping prioritize resource allocation for post-training based on task-specific language sensitivity.

Multilingual Inputs Amplify Agentic Security Vulnerabilities. Our evaluation on ASB revealed that multilingual adversarial inputs can bypass agent safety mechanisms with minimal sophistication. Direct translations of English jailbreak prompts—without any adaptation or obfuscation—were sufficient to induce policy-violating behavior in multiple languages. This highlights a critical risk: even simple adversarial prompts become significantly more effective when the input is localized, and are often sufficient to exploit security vulnerabilities in AI agents. Developers of safety-critical agentic systems should treat multilingual robustness as a core security concern and include translated prompts in safety evaluations using benchmarks like ours.

# 5.2 Benchmark Limitations

While MAPS represents the first multilingual suite for evaluating agentic AI systems, there are natural opportunities for future expansion. The current release includes four datasets, one agent per dataset, and ten target languages, offering a strong foundation for assessing multilingual robustness. Extending coverage to additional domains such as healthcare or legal reasoning, as well as incorporating multiple agents and extremely low-resource languages (e.g., Amharic or Uyghur), would further enhance the benchmark’s scope and relevance.
Nonetheless, the current suite already surfaces clear trends in performance and security degradation across languages, offering valuable insights for guiding multilingual deployment. We view this work as a meaningful starting point and invite the community to build on our open-source release to advance more inclusive and resilient agentic AI systems.
Agentic AI systems, which build on Large Language Models (LLMs) and interact with tools and memory, have rapidly advanced in capability and scope. Yet, since LLMs have been shown to struggle in multilingual settings, typically resulting in lower performance and reduced safety, agentic systems risk inheriting these limitations. This raises concerns about the global accessibility of such systems, as users interacting in languages other than English may encounter unreliable or security-critical agent behavior. Despite growing interest in evaluating agentic AI, existing benchmarks focus exclusively on English, leaving multilingual settings unexplored. To address this gap, we propose MAPS, a multilingual benchmark suite designed to evaluate agentic AI systems across diverse languages and tasks. MAPS builds on four widely used agentic benchmarks - GAIA (real-world tasks), SWE-bench (code generation), MATH (mathematical reasoning), and the Agent Security Benchmark (security). We translate each dataset into ten diverse languages, resulting in 805 unique tasks and 8,855 total language-specific instances. Our benchmark suite enables a systematic analysis of how multilingual contexts affect agent performance and robustness. Empirically, we observe consistent degradation in both performance and security when transitioning from English to other languages, with severity varying by task and correlating with the amount of translated input. Building on these findings, we provide actionable recommendations to guide agentic AI systems development and assessment under multilingual settings. This work establishes a standardized evaluation framework, encouraging future research towards equitable, reliable, and globally accessible agentic AI. MAPS benchmark suite is publicly available at https://huggingface.co/datasets/Fujitsu-FRE/MAPS
[ "cs.DB", "cs.CL", "cs.CR" ]
# 1 Introduction

In recent years, the integration of toolchains [1, 2, 3, 4] and iterative reasoning [5, 6, 7, 8] has significantly enhanced large language models (LLMs) in code-related tasks [9, 10, 11]. These advancements have enabled LLMs to proficiently complete code snippets [12, 13], debug errors [14], and even address complex machine learning problems [15, 16]. However, they struggle when confronted with real-world challenges that require task-driven code repositories [17]. At present, tackling such tasks remains largely manual and time-consuming due to the complexity and scale of the required code, which makes purely generative approaches impractical [11, 18, 19]. To overcome this, we propose a paradigm shift: reuse and adapt existing repositories as modular components tailored to specific tasks. This approach not only mitigates the challenges associated with repository-level code generation but also supports the broader goal of enabling agents to autonomously address sophisticated tasks using simple natural language instructions [20, 21].

To facilitate this approach, leveraging platforms like GitHub becomes crucial. With over 28 million public repositories out of 190 million total projects, GitHub offers an extensive library of ready-made solutions for code agents [17, 18, 22, 23]. Developers frequently reuse these repositories to tackle complex problems, yet LLM-based systems still falter in fully automating this process. Although frameworks like OpenHands [24] and SWE-Agent [14] demonstrate strong general capabilities, they often stumble on real-world codebases. In practice, simply following README instructions seldom works: READMEs can be vague, incomplete, or even erroneous, and repositories are not guaranteed to match a task’s requirements out of the box—commands may need parameter changes, and key files can be misplaced.
Consequently, when agents fail to locate or execute the necessary code, they must adapt by modifying existing components or generating new code to bridge the gap. To achieve this, agents need to understand the repository in a task-driven way. However, GitHub repositories often have two key properties that make this hard: (1) intricate structural complexity, with many interconnected files, classes, and functions, and (2) information density that exceeds the context limits of most LLMs. Existing frameworks [14, 15, 24, 25] do not provide mechanisms for grasping repository structures, tracking detailed dependencies, or strategically managing information within these constraints, ultimately resulting in suboptimal performance and higher token cost. In this paper, we introduce RepoMaster, an end-to-end agent framework designed for automating the use of code repositories to tackle complex tasks. To address these challenges, RepoMaster draws inspiration from human programmers, who rarely read every line of code or error log when exploring unfamiliar codebases. Instead, they first map a project’s structure, start by viewing a key file, then jump to related files based on signals like error traces, filtering out irrelevant details. Following this intuition, RepoMaster first performs hierarchical structure analysis, builds dependency and call graphs, and identifies core components as the initial context. Guided by these connections, it progressively explores the repository and applies information selection when viewing files and execution feedback to keep each interaction concise. By iteratively applying these steps, RepoMaster mimics human prioritization and makes efficient use of limited context windows. When evaluated on both MLE-R—a revised version of MLE-Bench-Lite [16]—and our newly constructed GitTaskBench [26], RepoMaster achieves significantly higher completion and success rates than OpenHands and SWE-Agent, while using far fewer tokens. 
Our contributions are summarized as follows: (1) We propose a novel automated framework, RepoMaster, that can effectively leverage code repositories to solve complex real-world tasks end-to-end. (2) To efficiently comprehend code in a goal-oriented, human-like manner, we integrate hybrid structural hierarchy modeling with core component identification, context-aware code exploration, and efficient information selection. (3) We validate RepoMaster’s effectiveness and efficiency against OpenHands and SWE-Agent through experiments on diverse complex tasks from MLE-R and GitTaskBench. # 2 Related Work # 2.1 Code Generation LLMs have made substantial progress in code generation [12, 13, 27, 28], exemplified by closed-source models [29, 30, 31] and the open-source series [32, 33, 34, 35]. Beyond basic code completion [36], modern LLMs now support advanced tasks such as semantic code editing [23, 37], debugging [38], and generating machine learning pipelines (e.g., AIDE [25] and MLAB [15] for Kaggle competitions). However, fully automating the creation of complex real-world codebases from scratch remains a critical challenge for AI agents [16, 19, 22].
Figure 1: Overview of the RepoMaster framework: (1) Repository Search driven by the user’s intent and extracted key entities; (2) Hierarchical Repository Analysis, building the Hierarchical Code Tree (HCT), Module Dependency Graph (MDG), and Function Call Graph (FCG) and identifying core components; (3) Autonomous Exploration & Execution, iterating between exploration tools (granular code view, dependency analysis, search), actions, observations, and feedback-driven optimization.
# 2.2 LLM-based Agents for Tool Use External tools are essential for extending the capabilities of LLM agents [5, 39, 40]. 
Relying on executable code [6, 9]—using scripts to import built-in libraries, call APIs, or invoke functionalized tools—has become a mainstream paradigm. Current works mainly focus on “tool learning” [1, 6, 10], but the more essential aspect of where to find the right tools is relatively overlooked [41]. Benchmarks such as API-Bank [42] and ToolEyes [43] synthesize function libraries but are not realistic or practical; platforms such as RapidAPI [44] host real services but are closed-source and hard to extend. Standards such as FastAPI [45] or MCP [46], which unify interfaces for tool use via function calling mechanisms, have emerged. However, GitHub—a rich and dynamic ecosystem for automatically creating tools—remains underutilized in this context. Although GitAgent [17] first explored GitHub repositories as a tool extension, it is limited by simplistic repository search and understanding, and lacks validation in diverse real-world scenarios. # 2.3 Repository Utilization Using GitHub repositories to solve complex real-world tasks presents significant challenges. RepoAgent [47] produces high-level documentation but fails to include realistic, task-oriented usage examples. ML-Bench-A [18] focuses on setting up the environment rather than understanding the repository. OpenHands [24] and SWE-Agent [14] are strong general agents that use step-by-step prompting to break down tasks and write code, but they lack methods to deeply understand the repository structure or build a clear hierarchy of its components. Aider [48] can track file dependencies but misses detailed function-level connections and cannot autonomously explore the codebase. Interactive assistants like Copilot [49] and Cursor [50] are effective for small-to-medium projects but struggle in large-scale repository contexts due to limited dependency awareness. 
# 3 Method Most current frameworks follow the CodeAct paradigm [9, 14, 21, 24], offering basic file-editing and exploration commands (e.g., OpenHands’ AgentSkills [24] and SWE-Agent’s command set [14]). But relying on README-based mappings and simple find/edit operations misses many core components and cannot perform deeper, autonomous exploration within limited LLM contexts. In contrast, RepoMaster mimics human programmers by performing a static, structure-aware analysis to locate critical components, then dynamically selecting only the essential snippets—skipping irrelevant information and focusing the LLM’s limited context on what matters. The full end-to-end RepoMaster framework consists of three stages: (1) Repository Search: Identifying repositories relevant to the task. (2) Hierarchical Repository Analysis: Preparing the structures for exploration. (3) Autonomous Exploration & Execution: Iteratively interacting with the repository and adjusting exploration actions based on execution feedback. An overview of the framework is provided in Figure 1. # 3.1 Repository Search To address complex online tasks expressed in natural language, we develop a deep-search method to locate the GitHub repositories most relevant to the task. We begin by analyzing the user’s intent and extracting key entities to target suitable repositories. For each candidate repository, we examine its README file and star count to assess its relevance and potential, and produce a brief description. We then filter candidates by content quality and practical utility. Finally, we validate the top three candidates and deliver the results as structured JSON. An example of the deep-searching log is shown in Appendix B. # 3.2 Hierarchical Repository Analysis # 3.2.1 Hybrid Structural Repository Mapping An essential prerequisite for task-oriented repository automation is a comprehensive structural model of the codebase. We sanitize the repository by removing all non-source files, retaining only executable .py files. 
For each retained file, we perform a single Abstract Syntax Tree (AST) walk [51] to recursively harvest both the meta-information and the raw source snippet of every module, class, and function. These atomic units provide the basis for understanding the repository’s structure. Let the target repository be denoted $\mathcal{R} = \langle M, C, F, \mathcal{T} \rangle$, where $M = \{ m_1, \dots, m_{|M|} \}$ is the set of modules (one per .py file), $C = \{ c_1, \dots, c_{|C|} \}$ the set of classes, $F = \{ f_1, \dots, f_{|F|} \}$ the set of functions/methods, and $\mathcal{T} \subseteq M \times M$ the explicit import relations captured from source files. On this foundation, we construct three complementary artefacts: • Hierarchical Code Tree (HCT). $\tau$, a nested package $\to$ module $\to$ class $\to$ function containment map annotated with line counts and docstring snippets. • Function Call Graph (FCG). $G_f = (V_f = F, E_f, w_f)$, where an edge $(f_i, f_j) \in E_f$ exists if $f_i$ invokes $f_j$; the weight $w_f$ encodes call frequency. • Module Dependency Graph (MDG). $G_m = (V_m = M, E_m, w_m)$, in which $(m_i, m_j) \in E_m$ if $m_i$ explicitly depends on $m_j$; $w_m$ measures coupling strength. We thus obtain the tuple $\langle M, C, F, \mathcal{T}, G_f, G_m, \tau \rangle$, providing the agent with a deterministic, loss-minimal structural synopsis of the entire repository before any task-specific exploration. 
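The single-pass AST harvest described above can be sketched with Python's standard `ast` module; the `harvest` helper and its output format here are illustrative assumptions, not the paper's actual implementation.

```python
import ast

def harvest(source: str, module_name: str) -> dict:
    """One AST walk over a module: collect classes (with line counts and
    docstring snippets for the HCT), functions/methods, and import targets
    (the raw material for MDG edges)."""
    tree = ast.parse(source)
    classes, functions, imports = [], [], []
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            classes.append({
                "name": node.name,
                "lines": node.end_lineno - node.lineno + 1,
                "doc": (ast.get_docstring(node) or "")[:80],
            })
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            functions.append({"name": node.name,
                              "lines": node.end_lineno - node.lineno + 1})
        elif isinstance(node, ast.Import):
            imports.extend(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            imports.append(node.module)
    return {"module": module_name, "classes": classes,
            "functions": functions, "imports": imports}

# Toy module standing in for one sanitized .py file of the repository.
src = '''
import os

class Restorer:
    """Removes scratches from old photos."""
    def run(self, path):
        return os.path.basename(path)
'''
info = harvest(src, "restorer")
```

Running `harvest` over every retained file and linking each module's `imports` entries to other modules' names would yield the MDG edge set; call-site extraction for the FCG requires an additional pass over `ast.Call` nodes.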
# 3.2.2 Core Component Identification Having obtained a fine-grained yet verbose structural synopsis of the repository, we now need to compress this information into a concise context that preserves only the most influential code entities: small enough for multiple interaction turns within the LLM’s window, yet rich enough to preserve global semantics. To this end, we specify an importance scoring scheme that operates first at the module level and then propagates to classes. Module-level scoring. Each module $m \in M$ receives a score $I(m) \in [0, 10]$ by linearly aggregating six orthogonal features, $$ s = [\mathrm{Dependency}, \mathrm{Complexity}, \mathrm{Usage}, \mathrm{Semantic}, \mathrm{Doc}, \mathrm{Git}], \qquad I(m) = \min\Bigl( \sum_{i=1}^{6} w_i s_i(m),\, 10 \Bigr), \qquad w_i \equiv 1, $$ where Dependency captures centrality in the MDG using the personalized PageRank [52] algorithm, Complexity approximates cyclomatic complexity, Usage measures import and call frequency, Semantic flags high-value keywords (e.g., main, core), Doc quantifies docstring richness, and Git reflects commit volume and recency. Detailed formulas for each feature are deferred to Appendix F.
Figure 2: The autonomous exploration and execution pipeline of RepoMaster: context-aware code exploration with information selection over code, documents, and logs (initial context, module summaries, code snippets, and exploration tool calling), and interactive feedback-based execution, illustrated on a photo-restoration task using Microsoft/Bringing-old-photos-back-to-life.
Class-level refinement. Module scores serve as priors for class importance. For every class $c$ located in module $\mu(c)$, we compute $$ J(c) = I\bigl(\mu(c)\bigr) + \frac{|F_c|}{\max_{c'} |F_{c'}|} + \frac{\mathrm{Calls}(F_c)}{\max_{c'} \mathrm{Calls}(F_{c'})}, $$ where $F_c$ denotes the method set of class $c$. The second term rewards class richness in functionality; the third term captures how often its methods are actually invoked in the repository. Classes are ranked according to $J(c)$, and the top-$k$ classes are selected as the repository’s core components. # 3.2.3 Repository Context Initialization Building on the identified core components, we construct an initial repository context in four distinct blocks. First, we include the complete README.md file, which provides high-level descriptions and detailed usage guidance authored by human developers. 
Second, we append a series of concise natural-language summaries for the highest-priority modules, giving the LLM a brief overview of each critical script’s purpose. Third, we provide the source code of core components (i.e., the classes scored and selected in Section 3.2.2) as fine-grained semantic anchors. Finally, for all other top-ranked modules, we provide a flat, directory-grouped list of their file paths for easy on-demand lookup. Figure 2 illustrates the initial context, and Appendix D provides a complete example of this initial repository context construction. This structured context serves as the agent’s "launchpad" for dynamic exploration, allowing it to prioritize high-impact modules, trace dependencies, formulate targeted code queries and select relevant classes or functions, bridging static analysis with dynamic reasoning and task execution. # 3.3 Autonomous Exploration & Execution # 3.3.1 Context-aware Code Exploration Once the agent has internalized the repository’s functionality and overall structure, it immediately transitions to dynamic analysis, performing an autonomous, hierarchical and graph-based traversal of the codebase. To support in-depth comprehension and effective utilization of the repository, we offer a suite of fine-grained exploration tools organized into three categories: Granular Code View, Dependency Analysis, and Search. • Granular Code View. This tool enables the agent to inspect the implementation details of files, classes, and functions using the HCT. It also retrieves and exposes the repository’s directory hierarchy, facilitating swift orientation within the codebase. • Dependency Analysis. This tool traces call chains and dependency paths by analyzing the FCG and MDG, respectively. It uncovers complex invocation and dependency relationships among code entities, thereby deepening the agent’s comprehension of module interactions and overall code structure. • Search. 
This tool equips the agent with robust search capabilities, facilitating rapid location of specific code segments within large and intricate codebases. It employs keyword matching to ensure efficient retrieval of relevant entities. Together, these tools empower AI agents to proactively and autonomously navigate and examine code repositories, achieving a level of comprehension and flexibility comparable to human developers. Empirically, we observe that complex repositories typically require detailed dependency analysis using FCG and MDG, whereas simpler repositories often allow agents to effectively rely on HCT. # 3.3.2 Interactive Feedback-based Execution Task execution is grounded in the agent’s evolving understanding of the repository. Once the agent has identified the hybrid structural elements described in Section 3.2.1 and core components described in Section 3.2.2 relevant to a given task, it begins to perform task-oriented operations. Crucially, execution and exploration form a continuous, interleaved loop rather than a linear sequence. The agent can fluidly alternate between writing code and locating files, viewing content and reading logs, or tracing dependencies, all driven by the task context across different interaction turns, and powered by the exploration tools described in Section 3.3.1. This flexible loop allows the agent to iteratively refine its behavior by retrieving just-in-time information from the codebase. Figure 2 illustrates the execution and exploration pipeline of RepoMaster. # 3.3.3 Context-aware Information Selection For Efficient Viewing The agent must juggle source code, documentation, execution results and logs within a tight LLM token window for multiple turns, making it difficult to maintain a globally coherent view of the repository and severely limiting its applicability to large projects. To mitigate this issue, we propose a multi-level content reduction strategy that retains only the most critical information. Viewing Code. 
At the code level, the agent parses source files into Abstract Syntax Trees (ASTs), extracts semantically and structurally meaningful subtrees, and uses these extracted subtrees as inputs. Viewing Documents. For large or unstructured artifacts (e.g., .txt or .csv files), the agent divides each file into fixed-length chunks of $L_c$ tokens. It then generates retrieval prompts tailored to the current subtask, ranks the chunks by relevance, and retains the $n_c$ most relevant segments. Viewing Feedback Logs. At the log level, we apply a human-like debugging heuristic that retains only the opening and closing segments of the log (where command invocations, exception traces, and diagnostic results cluster) and discards verbose intermediate output. Multi-level reduction strategies activate only when the combined size of all candidate inputs exceeds the per-interaction token limit $L$, preserving global coherence by focusing on high-impact information and ensuring each execution-loop step relies on a compact, relevant context. # 4 Experiments # 4.1 Benchmarks and Metrics To validate the effectiveness of RepoMaster, we evaluate it using two benchmarks. MLE-R. The original MLE-Bench [16] derives from Kaggle competitions, designed to evaluate LLM agents’ capabilities in end-to-end machine learning engineering tasks. To construct MLE-R, we select 22 MLE-Bench tasks (covering nearly all MLE-Bench-Lite cases) and apply the search procedure described in Section 3.1 to retrieve suitable GitHub repositories for each task, ensuring a fair comparison 1; the tasks’ requirements are set to be completed based on their chosen repository rather than generating code from scratch. Performance in MLE-R is evaluated using a medal-based system, the same as in the original MLE-Bench, where solutions are assessed based on official Kaggle thresholds 2 for gold, silver, and bronze medals. 
Metrics include the achieved score, medal thresholds, and medal qualification, providing a clear indication of the model’s proficiency in competitive ML engineering tasks. GitTaskBench. In contrast to MLE-R, which emphasizes standard machine learning tasks (e.g., image classification), our newly proposed GitTaskBench [26] 3 benchmark evaluates LLM agents on more practical real-world problems: common tasks whose complexity or format largely demands leveraging existing repositories, such as photo restoration. The benchmark consists of 18 repositories and 54 tasks, all described in natural language and designed to be completed using the provided repositories across a wide range of domains, such as image processing, video analysis, speech, physiological signals, office automation, and security and privacy. GitTaskBench evaluates two key aspects: Execution Completion Rate (measuring the model’s ability to leverage the repository for output) and Task Pass Rate (assessing whether the output meets task-specific evaluation criteria). Given the diversity of tasks, evaluation metrics are predefined and tailored within the benchmark, ensuring a comprehensive assessment. Note that total tokens include both input and output tokens. # 4.2 Evaluation Setup We evaluate our approach against two baseline frameworks and compare performance across three state-of-the-art LLMs. The evaluation setup is detailed below. Baseline Frameworks. We evaluate two baseline frameworks: OpenHands [24] and SWE-Agent [14]. OpenHands provides sandboxed environments for code execution and API interactions, while SWE-Agent focuses on autonomous GitHub issue resolution. Large Language Models. We evaluate multiple leading LLMs, including the closed-source GPT-4o-2024-08-06 [53] and Claude-3-5-sonnet-20241022 [54], as well as the open-source DeepSeek V3-0324 [55]. 
This setup enables a comprehensive assessment of both agent architectures and LLM capabilities on solving real-world tasks with repository utilization. Implementation Details. Our proposed solution RepoMaster is built on the multi-agent dialog platform AutoGen [21]. To ensure agent performance, we set a few key hyperparameters. Specifically, we set the maximum token length per interaction $L$ to 8000 tokens. For initial context construction, we generate concise summaries for the top 20 modules by importance score and extract $k = 10$ key classes. During the feedback phase, unstructured text files are split into chunks of $L_c = 1000$ tokens, retaining the $n_c = 4$ most relevant segments. # 4.3 Comparison with SOTA On the MLE-R benchmark, RepoMaster with Claude 3.5 attains a $95.45\%$ valid submission rate and a $27.27\%$ medal acquisition rate (including $22.73\%$ gold medals), representing a more than five-fold improvement over the best open-source agent baseline. RepoMaster with GPT-4o also achieves a strong $86.36\%$ valid submission rate and an $18.18\%$ medal rate, further confirming its robust performance advantage under varied settings. RepoMaster’s significant performance improvement stems primarily from its effective identification and utilization of core components within open-source repositories, such as neural network architecture designs, optimized hyperparameter configurations, and data preprocessing pipelines. In contrast, baseline methods like OpenHands and SWE-Agent often struggle to pinpoint critical modules during repository exploration, filling limited context windows with excessive irrelevant code, resulting in insufficient understanding of model architectures and training logic. Table 1: Performance comparison of different frameworks and LLMs on MLE-R. The best performance is bolded, and the second-best is underlined; the same applies below. 
In the GitTaskBench evaluation, RepoMaster significantly outperforms the existing open-source frameworks SWE-Agent and OpenHands. Based on Claude 3.5, RepoMaster achieves a $75.92\%$ execution completion rate and a $62.96\%$ task pass rate, surpassing OpenHands ($48.15\%$, $24.07\%$) and SWE-Agent ($44.44\%$, $14.81\%$). Similarly, RepoMaster maintains significant advantages on GPT-4o and DeepSeek V3, demonstrating that its advantages generalize well across underlying models. More importantly, RepoMaster substantially reduces computational overhead, with token consumption when using Claude 3.5 approximately $95\%$ lower than OpenHands (150k vs. 3000k tokens/task), proving the effectiveness of our hybrid hierarchical structure analysis and information pruning strategies. Table 2: Performance comparison of different frameworks and LLMs on GitTaskBench. # 4.4 Insightful Analysis Ablation Study. To quantitatively assess the contribution of each component in RepoMaster, we conduct a comprehensive ablation study on the GitTaskBench benchmark using GPT-4o as the underlying model. By systematically removing key mechanisms, we measure their impact on three metrics of effectiveness and efficiency: execution completion rate, task pass rate, and token usage. The results are shown in Table 3. Hybrid Hierarchical Analysis: Removing this component causes slight decreases in execution completion and task pass rates, with other components partially compensating. Token usage increases by $9.67\%$, showing that without structural repository understanding, more exploration is required. Context-aware Code Exploration: This component provides a suite of exploration tools enabling agents to autonomously navigate codebases according to task requirements, inspect file contents, and trace call chains and dependencies. 
The experiments show this component to be particularly critical, with its removal causing significant decreases in execution and pass rates. More notably, token usage increases substantially, validating our hypothesis that strategic exploration allows the agent to precisely target task-relevant components and thereby reduces redundant repository traversal. Table 3: Ablation study on the impact of core mechanisms in RepoMaster. The "w/o All" refers to RepoMaster without any of the three mechanisms. “Exec. Comp.” stands for “Execution Completion.”
Figure 3: Case study on the GitTaskBench task “Estimate the 3D human pose from the video’s 2D keypoints using the provided VideoPose3D repository.” RepoMaster succeeds in roughly 20 turns via targeted code analysis, while OpenHands (roughly 140 turns, goal-unoriented code modifications) and SWE-Agent (roughly 17 turns, limited repository understanding) both fail.
Information Selection: This component applies multi-level content reduction strategies to extract key information from source code, documents, and execution logs, preventing the context window from being filled with irrelevant content. Removing it decreases performance similarly to removing hybrid hierarchical analysis, with token usage increasing by only $6.00\%$. 
Its main value is maintaining a high signal-to-noise ratio rather than reducing token consumption. The most revealing comparison is between the full RepoMaster system and the base code agent without any of our proposed components. The baseline achieves only $29.62\%$ execution completion and $24.07\%$ task pass rates—decreases of $18.52\%$ and $16.67\%$. Interestingly, the baseline’s token usage is significantly lower, but this reflects a failure case rather than efficiency: the agent simply gives up earlier without the necessary tools to effectively explore and utilize the repository. Further analysis of the failure modes in ablated systems reveals: without hybrid hierarchical analysis, the agent struggles to locate key repository components, often getting lost in non-essential files; without context-aware exploration, the agent frequently explores irrelevant parts of the repository, resulting in context fragmentation and redundant exploration; without information selection, the agent’s context window becomes cluttered with low-value information, causing it to miss important details in error messages and execution traces. # 4.5 Case Study For the case study, we evaluated RepoMaster against OpenHands and SWE-Agent on a challenging 3D pose estimation task from GitTaskBench. As shown in Figure 3, neither baseline completed the task, owing to different failure modes. OpenHands ran extensive trial-and-error iterations ($\sim 140$ attempts, more than $10\times$ the others) and consumed far more tokens without success. SWE-Agent, although quicker, lacked task-level repository understanding—treating each error as a standalone fix and defaulting to a coarse 3D pose method that strayed from the core algorithm, causing task degradation. In contrast, RepoMaster leveraged structured repository analysis to efficiently focus on key components, achieving successful task completion with fewer attempts ($<20$ iterations).
The ultimate goal of code agents is to solve complex tasks autonomously. Although large language models (LLMs) have made substantial progress in code generation, real-world tasks typically demand full-fledged code repositories rather than simple scripts. Building such repositories from scratch remains a major challenge. Fortunately, GitHub hosts a vast, evolving collection of open-source repositories, which developers frequently reuse as modular components for complex tasks. Yet, existing frameworks like OpenHands and SWE-Agent still struggle to effectively leverage these valuable resources. Relying solely on README files provides insufficient guidance, and deeper exploration reveals two core obstacles: overwhelming information and tangled dependencies of repositories, both constrained by the limited context windows of current LLMs. To tackle these issues, we propose RepoMaster, an autonomous agent framework designed to explore and reuse GitHub repositories for solving complex tasks. For efficient understanding, RepoMaster constructs function-call graphs, module-dependency graphs, and hierarchical code trees to identify essential components, providing only identified core elements to the LLMs rather than the entire repository. During autonomous execution, it progressively explores related components using our exploration tools and prunes information to optimize context usage. Evaluated on the adjusted MLE-bench, RepoMaster achieves a 110% relative boost in valid submissions over the strongest baseline OpenHands. On our newly released GitTaskBench, RepoMaster lifts the task-pass rate from 24.1% to 62.9% while reducing token usage by 95%. Our code and demonstration materials are publicly available at https://github.com/wanghuacan/RepoMaster.
Zeroth-Order Optimization (ZOO) provides powerful tools for optimizing functions where explicit gradients are unavailable or expensive to compute. However, the underlying mechanisms of popular ZOO methods, particularly those employing randomized finite differences, and their connection to other optimization paradigms like Reinforcement Learning (RL) are not fully elucidated. This paper establishes a fundamental and previously unrecognized connection: ZOO with finite differences is equivalent to a specific instance of single-step Policy Optimization (PO). We formally unveil that the implicitly smoothed objective function optimized by common ZOO algorithms is identical to a single-step PO objective. Furthermore, we show that widely used ZOO gradient estimators are mathematically equivalent to the REINFORCE gradient estimator with a specific baseline function, revealing the variance-reducing mechanism in ZOO from a PO perspective. Built on this unified framework, we propose ZoAR (Zeroth-Order Optimization with Averaged Baseline and Query Reuse), a novel ZOO algorithm incorporating PO-inspired variance reduction techniques: an averaged baseline from recent evaluations and query reuse analogous to experience replay. Our theoretical analysis further substantiates that these techniques reduce variance and enhance convergence. Extensive empirical studies validate our theory and demonstrate that ZoAR significantly outperforms other methods in terms of convergence speed and final performance. Overall, our work provides a new theoretical lens for understanding ZOO and offers practical algorithmic improvements derived from its connection to PO.

# 1. Introduction

Zeroth-Order Optimization (ZOO) addresses the task of optimizing objectives $F ( \pmb \theta ) = \mathbb { E } _ { \xi } [ f ( \pmb \theta ; \xi ) ]$ using only function evaluations, bypassing the need for explicit gradients (Nesterov & Spokoiny, 2017; Ghadimi & Lan, 2013).
This paradigm is essential in numerous domains where gradients are intractable, computationally prohibitive, or simply unavailable, such as hyperparameter optimization (Gu et al., 2021), derivative-free reinforcement learning (Salimans et al., 2017), communication-efficient federated learning (Shu et al., 2024), black-box adversarial attacks (Shu et al., 2023; 2025b), prompt optimization (Hu et al., 2024; Zhan et al., 2024), and memory-efficient finetuning for large language models (LLMs) (Malladi et al., 2023; Zhang et al., 2024). A dominant strategy within ZOO involves estimating gradients via randomized finite differences, which implicitly optimize a smoothed surrogate $F _ { \mu } ( \pmb \theta )$ of the original objective $F ( \pmb \theta )$ (Nesterov & Spokoiny, 2017; Shu et al., 2025b). A thorough discussion of the most related works of ZOO is in Appx. A. While foundational, these methods often suffer from high variance in their gradient estimates, potentially impeding convergence speed and solution quality. Furthermore, a deep theoretical understanding connecting these ZOO techniques to established principles in related fields like Reinforcement Learning (RL) remains underdeveloped. In parallel, Policy Optimization (PO) forms the bedrock of modern RL, seeking policy parameters $\pmb \theta$ to maximize expected cumulative rewards $J ( \pmb \theta )$ (Sutton et al., 1999; Sutton & Barto, 2018). Policy Gradient (PG) algorithms like REINFORCE (Williams, 1992) estimate $\nabla J ( \pmb \theta )$ from trajectory rollouts. A crucial technique for stabilizing PG methods is baseline subtraction, which provably reduces gradient estimate variance and thereby accelerates learning (Sutton & Barto, 2018). As the first primary contribution, this paper establishes a fundamental and previously unrecognized connection: smoothed Zeroth-Order Optimization (ZOO) with finite differences is formally equivalent to a specific instance of single-step Policy Optimization (PO).
We bridge these two fields, providing theoretical clarification for ZOO mechanisms (Sec. 3): First, we formally unveil that the smoothed objective $F _ { \mu } ( \pmb \theta )$ implicitly targeted by common ZOO methods is identical to a single-step PO objective $J ( \pmb \theta )$ under a specific reward definition (Thm. 3.1). Second, we prove for the first time that the standard Gaussian-smoothed ZOO gradient estimator is mathematically equivalent to the single-step REINFORCE estimator using the function value $f ( \pmb \theta ; \pmb \xi )$ as a baseline (Thm. 3.2). This novel interpretation recasts the standard ZOO baseline subtraction not merely as a finite-difference artifact, but as a principled variance reduction technique rooted in PO theory, revealing the variance-reducing mechanism in ZOO from a PO lens. Third, we further extend this foundational equivalence using importance sampling (Thm. 3.3), clarifying how ZOO estimators with alternative sampling distributions relate to weighted REINFORCE and optimize distinct smoothed objectives. Building upon this newly established unified PO framework, our second primary contribution is ZoAR (Zeroth-Order Optimization with Averaged Baseline and Query Reuse) proposed in Sec. 4. ZoAR is the first to integrate two PO-inspired variance reduction techniques directly into conventional ZOO (see Sec. 4.1): (a) Averaged Baseline: Instead of the high-variance single-point estimate $f ( \pmb \theta ; \pmb \xi )$ , ZoAR introduces an averaged baseline from recent function evaluations in a history buffer. This novel ZOO adaptation of the value function estimation in PO provides a more stable Monte Carlo estimate of the smoothed objective $F _ { \mu } ( \pmb \theta )$ .
(b) Query Reuse: ZoAR computes gradient estimates using all samples in the history buffer (analogous to the experience replay in PO), effectively increasing the batch size for gradient estimation without new queries per iteration, thus enhancing sample efficiency and mitigating variance. We further provide rigorous theoretical analysis in Appx. B to support the variance reduction effect of these two newly introduced PO-inspired techniques from the lens of ZOO theory and show the potentially improved convergence of ZoAR when variance dominates. Our third contribution lies in comprehensive empirical validation (Sec. 5). We benchmark ZoAR against other ZOO baselines across standard synthetic functions, a black-box adversarial attack task, and memory-efficient finetuning of LLMs. The results consistently show that ZoAR achieves significant improvements in convergence rate and final performance, validating the practical efficacy of leveraging these newly connected PO techniques for ZOO. Notably, substantial gains are observed even with our novel averaged baseline alone, highlighting its distinct effectiveness.

# 2. Preliminaries

This section introduces the necessary background on Zeroth-Order Optimization (ZOO) and Policy Optimization (PO) in Reinforcement Learning (RL), establishing the notation and core concepts used throughout the paper. Problem Setup. We focus on the problem of minimizing a potentially non-convex objective function $F ( \pmb \theta )$ defined as an expectation over a random variable $\xi$ : $$ \operatorname* { m i n } _ { \pmb { \theta } \in \mathbb { R } ^ { d } } F ( \pmb { \theta } ) \triangleq \mathbb { E } _ { \xi } \left[ f ( \pmb { \theta } ; \xi ) \right] \ . $$ Here, $\pmb { \theta } \in \mathbb { R } ^ { d }$ represents the $d$ -dimensional parameter vector we aim to optimize, and $f ( \pmb \theta ; \pmb \xi )$ is a scalar-valued loss function whose evaluation depends on both the parameters $\pmb \theta$ and a random variable $\xi$ .
The defining characteristic of the Zeroth-Order (ZO) setting is the constraint that we can only access stochastic evaluations of the function value, $f ( \pmb \theta ; \pmb \xi )$ , through a black-box oracle. Importantly, direct access to the gradient $\nabla _ { \pmb { \theta } } f ( \pmb { \theta } ; \pmb { \xi } )$ is assumed to be unavailable or computationally prohibitive. Throughout this paper, we use $\nabla$ to denote the gradient with respect to the parameters $\pmb \theta$ , i.e., $\nabla \equiv \nabla _ { \pmb { \theta } }$ . Zeroth-Order Optimization. To optimize (1) without explicit gradients, ZOO algorithms employ gradient estimators constructed solely from function evaluations. A prevalent technique is randomized finite differences. A common form of such an estimator, averaged over $K$ directions, is: $$ { \hat { \nabla } } F ( \theta ) \triangleq { \frac { 1 } { K } } \sum _ { k = 1 } ^ { K } { \frac { f ( \theta + \mu \mathbf { u } _ { k } ; \xi ) - f ( \theta ; \xi ) } { \mu } } \mathbf { u } _ { k } \ , $$ where $\{ \mathbf { u } _ { k } \} _ { k = 1 } ^ { K }$ are i.i.d. random direction vectors, $\mu > 0$ is a small smoothing radius parameter, and $K \geq 1$ dictates the number of function queries used per gradient estimate (beyond the baseline evaluation $f ( \pmb \theta ; \pmb \xi )$ ). Standard choices for the distribution of $\mathbf { u } _ { k }$ include: (I) The standard multivariate Gaussian distribution $\mathbf { u } _ { k } \sim \mathcal { N } ( \mathbf { 0 } , \mathbf { I } _ { d } )$ (Nesterov & Spokoiny, 2017). (II) The uniform distribution over the unit sphere $\mathbf { u } _ { k } \sim \mathrm { U n i f } ( \mathbb { S } ^ { d - 1 } )$ (Flaxman et al., 2005a). (III) The uniform distribution over the standard basis vectors $\mathbf { u } _ { k } \sim \mathrm { U n i f } ( \{ e _ { 1 } , \ldots , e _ { d } \} )$ (Lian et al., 2016).
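As a concrete illustration, the estimator in (2) with the three direction choices (I)-(III) can be sketched in a few lines of NumPy. This is an illustrative sketch only: the objective `f` stands in for the black-box oracle $f(\pmb\theta; \xi)$, and all names here are ours, not the paper's.

```python
import numpy as np

def zoo_grad_estimate(f, theta, mu=1e-2, K=10, dist="gaussian", rng=None):
    """Randomized finite-difference gradient estimator, cf. (2).

    dist picks the direction distribution:
      "gaussian"   - u_k ~ N(0, I_d)              (choice I)
      "sphere"     - u_k ~ Unif(S^{d-1})          (choice II)
      "coordinate" - u_k ~ Unif({e_1, ..., e_d})  (choice III)
    """
    rng = np.random.default_rng(rng)
    d = theta.shape[0]
    f0 = f(theta)                      # baseline evaluation f(theta; xi)
    g = np.zeros(d)
    for _ in range(K):
        if dist == "gaussian":
            u = rng.standard_normal(d)
        elif dist == "sphere":
            u = rng.standard_normal(d)
            u /= np.linalg.norm(u)     # project onto the unit sphere
        else:
            u = np.zeros(d)
            u[rng.integers(d)] = 1.0   # a random standard basis vector
        g += (f(theta + mu * u) - f0) / mu * u
    return g / K
```

On a quadratic $f(\pmb\theta) = \|\pmb\theta\|^2$ with Gaussian directions, the estimate concentrates around $\nabla F_\mu(\pmb\theta) = 2\pmb\theta$ as $K$ grows, at the cost of $K + 1$ oracle queries per estimate.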
It is well-established that (2) is an unbiased estimator of the gradient of a smoothed approximation $F _ { \mu }$ (defined below) of the original objective $F ( \pmb \theta )$ (Nesterov & Spokoiny, 2017; Shu et al., 2025b). This means that ZOO with estimator (2) is in fact implicitly optimizing the smoothed objective $F _ { \mu }$ : $$ F _ { \mu } ( \pmb \theta ) \triangleq \mathbb { E } _ { \mathbf { u } } \left[ F ( \pmb \theta + \mu \mathbf { u } ) \right] = \mathbb { E } _ { \mathbf { u } } \left[ \mathbb { E } _ { \pmb \xi } \left[ f ( \pmb \theta + \mu \mathbf { u } ; \pmb \xi ) \right] \right] . $$ Policy Optimization and REINFORCE. In policy optimization, the objective is typically to find the parameters $\pmb \theta$ of a stochastic policy $\pi _ { \boldsymbol { \theta } } ( a | \boldsymbol { s } )$ that maximize the expected cumulative reward. Let us consider the standard episodic setting. The objective function, $J ( \pmb \theta )$ , is the expected total discounted reward obtained by executing the policy $\pi _ { \boldsymbol { \theta } }$ starting from an initial state distribution $\rho _ { 0 } ( s _ { 0 } )$ : $$ \begin{array} { l } { \displaystyle { \cal J } ( \pmb { \theta } ) \triangleq \mathbb { E } _ { \tau \sim p _ { \pmb { \theta } } ( \tau ) } \left[ \sum _ { { t = 0 } } ^ { T - 1 } \gamma ^ { t } R ( S _ { t } , A _ { t } ) \right] } \\ { \displaystyle = \mathbb { E } _ { S _ { 0 } \sim \rho _ { 0 } , A _ { t } \sim \pi _ { \pmb { \theta } } ( \cdot | S _ { t } ) , S _ { t + 1 } \sim P ( \cdot | S _ { t } , A _ { t } ) } \left[ \sum _ { { t = 0 } } ^ { T - 1 } \gamma ^ { t } R ( S _ { t } , A _ { t } ) \right] . } \end{array} $$ Here, $\tau = ( S _ { 0 } , A _ { 0 } , R _ { 0 } , \ldots , S _ { T - 1 } , A _ { T - 1 } , R _ { T - 1 } , S _ { T } )$ represents a trajectory (or episode) of states $S _ { t }$ , actions $A _ { t }$ , and rewards $R _ { t } = R ( S _ { t } , A _ { t } )$ .
The trajectory distribution $p _ { \pmb { \theta } } ( \tau )$ is induced by the policy $\pi _ { \boldsymbol { \theta } }$ and the transition dynamics $P ( S _ { t + 1 } | S _ { t } , A _ { t } )$ of the environment. $\gamma \in [ 0 , 1 ]$ is the discount factor, and $T$ is the episode horizon (which can be finite or infinite). Note that while policy optimization typically involves maximization, we can frame it as minimization by considering the negative reward (cost), i.e., minimizing $- J ( \pmb \theta )$ , to align with the optimization setup in (1). Policy Gradient methods are a class of algorithms designed to optimize $J ( \pmb \theta )$ by estimating its gradient $\nabla J ( \pmb \theta )$ and performing gradient ascent (or descent on $- J ( \pmb \theta )$ ). The Policy Gradient Theorem (Sutton et al., 1999) provides the analytical form of this gradient, and a widely used policy gradient is derived from the REINFORCE (w/ baseline) algorithm (Williams, 1992): $$ \nabla J ( \pmb \theta ) = \mathbb { E } _ { \tau \sim p _ { \pmb \theta } ( \tau ) } \left[ \sum _ { t = 0 } ^ { T - 1 } \nabla \ln \pi _ { \pmb \theta } ( A _ { t } | S _ { t } ) \left( G _ { t } - b ( S _ { t } ) \right) \right] , $$ where $G _ { t } = \sum _ { t ^ { \prime } = t } ^ { T - 1 } \gamma ^ { t ^ { \prime } - t } R ( S _ { t ^ { \prime } } , A _ { t ^ { \prime } } )$ represents the discounted return-to-go from time step $t$ and the state-dependent baseline $b ( S _ { t } )$ is applied for variance reduction.

# 3. A Policy Optimization Framework for Zeroth-Order Optimization

Building on the preliminaries in Sec. 2, this section formally establishes the connection between Zeroth-Order Optimization (ZOO) and Policy Optimization (PO). We demonstrate that the ZOO problem can be precisely framed as a single-step PO problem (Sec. 3.1).
Furthermore, we show that common ZOO gradient estimators are equivalent to specific instances of the REINFORCE algorithm with a baseline (Sec. 3.2 & Sec. 3.3).

# 3.1. Equivalence of Objectives in ZOO and PO

We begin by demonstrating the equivalence between the objective function implicitly optimized by many ZOO methods, i.e., $F _ { \mu } ( \pmb \theta )$ in (3), and a specific instance of the PO objective. Formally, consider the standard PO objective from (4) in a simplified, single-step episodic setting (i.e., $T = 1$ , $\gamma = 0$ ). In this scenario, the agent takes a single action $\mathbf { x }$ sampled from a policy $\pi _ { \pmb { \theta } } ( \mathbf { x } )$ , and receives a reward based on this action. To align with the minimization problem (1), we define the reward as the negative function value, $R _ { 0 } = - F ( \mathbf { x } )$ . The PO objective is then to minimize the expected negative reward: $$ J ( \pmb \theta ) \triangleq \mathbb { E } _ { \mathbf { x } \sim \pi _ { \pmb \theta } ( \mathbf { x } ) } \left[ F ( \mathbf { x } ) \right] = \mathbb { E } _ { \mathbf { x } \sim \pi _ { \pmb \theta } ( \mathbf { x } ) } \left[ \mathbb { E } _ { \pmb \xi } \left[ f ( \mathbf { x } ; \pmb \xi ) \right] \right] . $$ The connection between the ZOO smoothed objective $F _ { \mu } ( \pmb \theta )$ defined in (3) and this single-step PO objective $J ( \pmb \theta )$ defined in (6) is formalized below (proof in Appx. C.1). Theorem 3.1 (Objective Equivalence). Let the policy $\pi _ { \boldsymbol { \theta } } ( \mathbf { x } )$ be defined via the reparameterization $\mathbf { x } = \pmb \theta + \mu \mathbf { u }$ , where $\mathbf { u }$ is a random vector drawn from a distribution $p ( \mathbf { u } )$ independent of $\pmb \theta$ .
Then, the single-step PO objective $J ( \pmb \theta )$ defined in (6) is identical to the ZOO smoothed objective $F _ { \mu } ( \pmb \theta )$ defined in (3) using the same distribution $p ( \mathbf { u } )$ , i.e., $$ J ( \pmb \theta ) = F _ { \mu } ( \pmb \theta ) . $$ Remark. Thm. 3.1 establishes that optimizing the smoothed function $F _ { \mu } ( \pmb \theta )$ , a standard practice in ZOO theory, is equivalent to optimizing a single-step RL objective $J ( \pmb \theta )$ where the policy samples perturbations around the current parameters $\pmb \theta$ . This equivalence allows us to leverage concepts and algorithms from PO to understand and potentially improve ZOO methods (see Sec. 4). The choice of the smoothing distribution $p ( \mathbf { u } )$ in ZOO corresponds to the choice of the exploration strategy (policy structure) in this PO context. To the best of our knowledge, this is the first work to explicitly interpret the ZOO smoothed objective through this specific PO lens.

# 3.2. Gaussian Smoothing as Single-Step REINFORCE w/ Baseline

We now demonstrate that the widely used Gaussian-smoothed ZOO gradient estimator is equivalent to a specific instance of the REINFORCE w/ baseline algorithm. Let the smoothing distribution be the standard multivariate Gaussian, $p ( \mathbf { u } ) = \mathcal { N } ( \mathbf { 0 } , \mathbf { I } _ { d } )$ . The corresponding policy $\pi _ { \boldsymbol { \theta } } ( \mathbf { x } )$ samples $\mathbf { x } = \pmb { \theta } + \mu \mathbf { u }$ , which means $\mathbf { x } \sim \mathcal { N } ( \pmb { \theta } , \mu ^ { 2 } \mathbf { I } _ { d } )$ . To minimize $F _ { \mu } ( \pmb \theta ) = J ( \pmb \theta )$ , we apply the REINFORCE w/ baseline algorithm using the policy gradient theorem (5).
For our single-step case ( $T = 1$ ), the policy gradient gives: $$ \nabla J ( \pmb \theta ) = \mathbb { E } _ { \mathbf { x } \sim \pi _ { \pmb \theta } ( \mathbf { x } ) } \left[ \nabla \ln \pi _ { \pmb \theta } ( \mathbf { x } ) \left( \mathbb { E } _ { \xi } [ f ( \mathbf { x } ; \xi ) ] - b \right) \right] , $$ where $b$ is a baseline that is independent of the specific sample $\mathbf { x }$ . Particularly, for the Gaussian policy $\pi _ { \pmb { \theta } } ( \mathbf { x } ) = \mathcal { N } ( \pmb { \theta } , \mu ^ { 2 } \mathbf { I } _ { d } )$ , we have $\nabla \ln \pi _ { \pmb \theta } ( \mathbf { x } ) = \frac { \mathbf { x } - \pmb \theta } { \mu ^ { 2 } }$ . Substituting this yields: $$ \nabla J ( \pmb \theta ) = \mathbb { E } _ { \mathbf { x } \sim \pi _ { \pmb \theta } ( \mathbf { x } ) } \left[ \frac { \mathbf { x } - \pmb \theta } { \mu ^ { 2 } } \left( \mathbb { E } _ { \xi } [ f ( \mathbf { x } ; \xi ) ] - b \right) \right] . $$ In practice, the expectations are approximated using Monte Carlo sampling. Letting $b = \mathbb { E } _ { \xi } \left[ b ( \xi ) \right]$ , we sample $\mathbf { x } _ { k }$ from $\pi _ { \pmb { \theta } } ( \mathbf { x } )$ to estimate the outer expectation and $\xi$ to estimate the inner expectation. A common stochastic gradient estimator based on $K$ samples is then: $$ \hat { \nabla } _ { \mathrm { G S } } J ( \pmb \theta ) \triangleq \frac { 1 } { K } \sum _ { k = 1 } ^ { K } \frac { \mathbf { x } _ { k } - \pmb \theta } { \mu ^ { 2 } } \left( f ( \mathbf { x } _ { k } ; \pmb \xi ) - b ( \pmb \xi ) \right) . $$ The connection between the standard Gaussian-smoothed ZOO gradient estimator from (2) and the REINFORCE gradient estimator (9) is formalized below (proof in Appx. C.2). Theorem 3.2 (Gradient Estimator Equivalence for Gaussian).
Let $\pi _ { \pmb \theta } ( \mathbf { x } ) = \mathcal { N } ( \pmb \theta , \mu ^ { 2 } \mathbf { I } _ { d } )$ and $b ( \xi ) = f ( \pmb \theta ; \xi )$ in (9). Then, the REINFORCE gradient estimator (9) is identical to the Gaussian-smoothed ZOO gradient estimator (2), i.e., $$ \hat { \nabla } _ { \mathrm { G S } } J ( \pmb \theta ) = \hat { \nabla } F ( \pmb \theta ) . $$ Remark. Thm. 3.2 provides the first explicit interpretation of the common ZOO gradient estimator (2) from a novel PO lens. Specifically, it reveals that the Gaussian-smoothed ZOO estimator can be interpreted as a REINFORCE gradient estimator with a Gaussian policy. Moreover, it unveils that the subtraction of $f ( \pmb \theta ; \pmb \xi )$ in conventional ZOO is not merely an artifact of the first-order Taylor expansion but corresponds precisely to using a baseline in the REINFORCE algorithm. This baseline is known to reduce the variance of the gradient estimate without introducing bias (Sutton & Barto, 2018). This perspective not only aligns with but also provides theoretical support for observations in works like (Salimans et al., 2017), where similar estimators were used in the context of evolution strategies, highlighting the variance reduction benefit without explicitly linking it to the REINFORCE w/ baseline mechanism.

# 3.3. Generalization Through Importance Sampling

The previous section only established the equivalence for Gaussian smoothing, whereas ZOO methods can also apply other sampling distributions for $\mathbf { u } _ { k }$ , like the uniform distribution over the unit sphere or coordinate directions mentioned in Sec. 2. We hence generalize our PO perspective to encompass these cases using importance sampling (IS) in this section.
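Before moving to the general case, the Gaussian-case equivalence (Thm. 3.2) is easy to sanity-check numerically: with $b(\xi) = f(\pmb\theta; \xi)$ and $\mathbf{x}_k = \pmb\theta + \mu\mathbf{u}_k$, every term of the REINFORCE estimator (9) reduces algebraically to the corresponding term of (2), since $\nabla \ln \pi_{\pmb\theta}(\mathbf{x}_k) = \mathbf{u}_k / \mu$. A minimal NumPy check (the objective below is an arbitrary stand-in for the black-box oracle):

```python
import numpy as np

rng = np.random.default_rng(1)
d, mu, K = 6, 0.05, 32
theta = rng.standard_normal(d)
f = lambda x: float(np.sum(np.sin(x)) + x @ x)  # arbitrary black-box objective

U = rng.standard_normal((K, d))                 # u_k ~ N(0, I_d)
X = theta + mu * U                              # x_k ~ N(theta, mu^2 I_d)
b = f(theta)                                    # baseline b(xi) = f(theta; xi)

# REINFORCE w/ baseline, cf. (9): grad log pi_theta(x) = (x - theta) / mu^2
g_reinforce = np.mean(
    [(x - theta) / mu**2 * (f(x) - b) for x in X], axis=0)
# Gaussian-smoothed ZOO finite-difference estimator, cf. (2)
g_zoo = np.mean(
    [(f(theta + mu * u) - b) / mu * u for u in U], axis=0)

assert np.allclose(g_reinforce, g_zoo)          # identical, cf. Thm. 3.2
```

The agreement is term-by-term, not just in expectation: both estimators are the same function of the shared perturbations $\{\mathbf{u}_k\}$.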
Suppose we still consider the objective $J ( \pmb \theta )$ with the Gaussian policy $\pi _ { \pmb \theta } ( \mathbf { x } ) = \mathcal { N } ( \pmb \theta , \mu ^ { 2 } \mathbf { I } _ { d } )$ , but we want to estimate its gradient using samples drawn from a different proposal distribution $p ( \mathbf { x } )$ . The policy gradient using importance sampling becomes: $$ \nabla J ( \pmb \theta ) = \mathbb { E } _ { \mathbf { x } \sim p ( \mathbf x ) } \left[ \frac { \pi _ { \pmb \theta } ( \mathbf x ) } { p ( \mathbf x ) } \nabla \ln \pi _ { \pmb \theta } ( \mathbf x ) \left( \mathbb { E } _ { \xi } [ f ( \mathbf x ; \xi ) ] - b \right) \right] . $$ Similar to (9), by substituting $\nabla \ln \pi _ { \pmb \theta } ( \mathbf { x } ) = \frac { \mathbf { x } - \pmb \theta } { \mu ^ { 2 } }$ and $b = \mathbb { E } _ { \xi } \left[ b ( \xi ) \right]$ , and using Monte Carlo approximation with samples $\mathbf { x } _ { k } \sim p ( \mathbf { x } )$ , we get the stochastic gradient estimator: $$ \hat { \nabla } _ { \mathrm { I S } } J ( \pmb \theta ) \triangleq \frac { 1 } { K } \sum _ { k = 1 } ^ { K } \frac { \pi _ { \pmb \theta } ( \mathbf { x } _ { k } ) } { p ( \mathbf { x } _ { k } ) } \frac { \mathbf { x } _ { k } - \pmb \theta } { \mu ^ { 2 } } \left( f ( \mathbf { x } _ { k } ; \pmb \xi ) - b ( \pmb \xi ) \right) \ . $$ The connection between the ZOO gradient estimator under various sampling distributions from (2) and the IS-based REINFORCE gradient estimator (11) is formalized below (proof in Appx. C.3). Theorem 3.3 (Extended Gradient Estimator Equivalence). Let $\pi _ { \pmb { \theta } } ( \mathbf { x } ) = \mathcal { N } ( \pmb { \theta } , \mu ^ { 2 } \mathbf { I } _ { d } )$ , $p ( \mathbf { x } ) = p ( \pmb { \theta } + \mu \mathbf { u } )$ , and $b ( \xi ) = f ( \pmb \theta ; \pmb \xi )$ in (11).
The IS-based REINFORCE gradient estimator (11) is identical to a scaled ZOO gradient estimator (2) for the three different distributions of $\mathbf { u } _ { k }$ in Sec. 2: $$ \hat { \nabla } _ { \mathrm { I S } } J ( \pmb \theta ) = \gamma \hat { \nabla } F ( \pmb \theta ) . $$ Particularly, let $\Gamma ( \cdot )$ be the Gamma function: (I) if $\mathbf { u } _ { k } \sim \mathcal { N } ( \mathbf { 0 } , \mathbf { I } _ { d } )$ , $\gamma = 1$ ; (II) if $\mathbf { u } _ { k } \sim \mathrm { U n i f } ( \mathbb { S } ^ { d - 1 } )$ , $\gamma = \frac { 2 ^ { 1 - d / 2 } \exp ( - 1 / 2 ) } { \mu \Gamma ( d / 2 ) }$ ; (III) if $\mathbf { u } _ { k } \sim \mathrm { U n i f } ( \{ e _ { 1 } , \ldots , e _ { d } \} )$ , $\gamma = \frac { d \exp ( - 1 / 2 ) } { ( 2 \pi \mu ^ { 2 } ) ^ { d / 2 } }$ . Remark. Thm. 3.3 reveals that ZOO estimators employing non-Gaussian sampling distributions for $\mathbf { u } _ { k }$ (e.g., uniform on sphere or coordinate-wise) can also be interpreted as REINFORCE gradient estimators through the lens of importance sampling. Specifically, the ZOO gradient $\hat { \nabla } F ( \pmb \theta )$ (unbiased for its own smoothed objective $F _ { \mu } ( \pmb \theta )$ with the non-Gaussian $p ( \mathbf { u } )$ ) remains equivalent to an IS-based REINFORCE estimator for $J ( \pmb \theta )$ with the Gaussian policy, scaled by $\gamma$ . This scaling factor $\gamma$ arises from the implicit importance weights between the Gaussian policy for $J ( \pmb \theta )$ and the ZOO proposal distribution $p ( \mathbf { u } )$ . This perspective unifies diverse ZOO sampling strategies under the REINFORCE lens, provides a principled reason for the learning rate adjustments in Cor. 3.4, and further solidifies the fundamental equivalence between the convergence of ZOO and single-step PO. Corollary 3.4 (Convergence Equivalence). Under the same condition in Thm. 3.3, let the baseline $b ( \xi )$ and update rule (e.g.
the gradient descent algorithm or the Adam algorithm (Kingma & Ba, 2015)) be the same for ZOO and REINFORCE; then they achieve identical convergence when $$ \eta _ { R } = \eta _ { Z } / \gamma \ . $$ Here, $\gamma$ is from Thm. 3.3, and $\eta _ { Z }$ and $\eta _ { R }$ are the learning rates of ZOO and REINFORCE, respectively.

# 4. Zeroth-Order Optimization with Averaged Baseline and Query Reuse

Leveraging the Policy Optimization (PO) framework established in Sec. 3, this section introduces ZoAR (Algo. 1), an improved Zeroth-Order Optimization (ZOO) algorithm. We illustrate in Sec. 4.1 how ZoAR incorporates PO-inspired variance reduction techniques, including an averaged baseline and query reuse, for enhanced efficiency. While Algo. 1 demonstrates these techniques using the update rule from $\mathcal { R }$ -AdaZO (Shu et al., 2025b), their core design is general and readily adaptable to other update rules like ZO-SGD (Ghadimi & Lan, 2013) and ZO-AdaMM (Chen et al., 2019). Furthermore, we provide theoretical analyses in ZOO theory to validate these PO-derived improvements in Appx. B.

# 4.1. Algorithm Design

We introduce the two key PO-inspired techniques in ZoAR (line 5 of Algo. 1), namely the averaged baseline and query reuse, below. Averaged Baseline. As established in Thm. 3.2, the standard Gaussian-smoothed ZOO gradient estimator (2) implicitly uses $f ( \pmb \theta ; \pmb \xi )$ as a baseline, corresponding to $b ( \xi ) = f ( \pmb \theta ; \pmb \xi )$ in the REINFORCE framework (9). While this baseline helps reduce variance compared to no baseline, it may not be the most effective choice. In the single-step REINFORCE algorithm, the baseline that minimizes the variance of the gradient estimate $\nabla \ln \pi _ { \pmb { \theta } } ( \mathbf { x } ) ( R ( \mathbf { x } ) - b )$ is given by $b ^ { * } = \frac { \mathbb { E } _ { \mathbf { x } \sim \pi _ { \pmb \theta } ( \mathbf { x } ) } [ \| \nabla \ln \pi _ { \pmb \theta } ( \mathbf { x } ) \| ^ { 2 } R ( \mathbf { x } ) ] } { \mathbb { E } _ { \mathbf { x } \sim \pi _ { \pmb \theta } ( \mathbf { x } ) } [ \| \nabla \ln \pi _ { \pmb \theta } ( \mathbf { x } ) \| ^ { 2 } ] }$ .
A simpler and widely used near-optimal baseline is the expected reward itself, $b = \mathbb { E } _ { \mathbf { x } \sim \pi _ { \pmb { \theta } } ( \mathbf { x } ) } [ R ( \mathbf { x } ) ]$ . In our ZOO context, where $R ( \mathbf { x } ) = - F ( \mathbf { x } ) = - \mathbb { E } _ { \xi } [ f ( \mathbf { x } ; \xi ) ]$ and $\mathbf { x } = \pmb { \theta } + \mu \mathbf { u }$ , this corresponds to choosing the baseline as $b = \mathbb { E } _ { \mathbf { x } \sim \pi _ { \pmb { \theta } } ( \mathbf { x } ) } [ F ( \mathbf { x } ) ] = F _ { \mu } ( \pmb { \theta } )$ . The standard ZOO baseline $f ( \pmb \theta ; \pmb \xi )$ can be seen as a single-sample, zero-order approximation of $F _ { \mu } ( \pmb \theta )$ evaluated at the center point. Algo. 1 proposes using a more robust estimate of this expected value. Specifically, it computes the baseline $b _ { t }$ as the empirical average of function values obtained from recent queries stored in a history buffer $\mathcal { H } _ { t }$ : $$ b _ { t } \triangleq \frac { 1 } { | \mathcal { H } _ { t } | } \sum _ { ( \mathbf { u } , y ) \in \mathcal { H } _ { t } } y \ , $$ where $y = f ( \pmb { \theta } _ { t ^ { \prime } } + \mu \mathbf { u } ; \xi )$ for some past iteration $t ^ { \prime } \leq t - 1$ . This average in fact serves as a Monte Carlo estimate of the expected function value $F _ { \mu } ( \pmb \theta )$ , potentially providing a lower-variance baseline compared to the single evaluation $f ( \pmb \theta ; \pmb \xi )$ used implicitly in (2), which we will verify in Appx. B. Query Reuse. To further enhance sample efficiency and reduce variance, Algo. 1 incorporates query reuse. This mirrors the concept of using off-policy data, common in algorithms like Proximal Policy Optimization (PPO) (Schulman et al., 2017), where experiences gathered under previous policies are reused to improve the current policy update, thereby increasing data efficiency. In our ZOO context, Algo.
1 maintains a history buffer $\mathcal { H } _ { t }$ containing the $N \times K$ most recent query results (pairs of perturbation vectors $\mathbf { u }$ and corresponding function values $y$ ). At iteration $t$ , $K$ new queries based on $\pmb { \theta } _ { t - 1 }$ are performed and added to the buffer, and the oldest $K$ queries are discarded. Crucially, the gradient estimate $\pmb { g } _ { t } \triangleq \hat { \nabla } F ( \pmb { \theta } _ { t - 1 } )$ is then computed using all samples currently in the history $\mathcal { H } _ { t }$ : $$ \hat { \nabla } F ( \pmb \theta _ { t - 1 } ) \triangleq \frac { 1 } { | \mathcal { H } _ { t } | - 1 } \sum _ { ( \mathbf { u } , y ) \in \mathcal { H } _ { t } } \frac { y - b _ { t } } { \mu } \mathbf { u } \ . $$ This approach uses all $| \mathcal { H } _ { t } | = N \times K$ samples, effectively increasing the gradient estimation batch size without additional queries beyond the initial $K$ . The resulting averaging over a larger set is expected to produce a gradient estimate with significantly lower variance (verified in Appx. B). Advantages. The proposed ZoAR algorithm offers several compelling advantages. (a) It provides significant variance reduction by employing an averaged baseline $b _ { t }$ and reusing historical queries from $\mathcal { H } _ { t }$ (see Appx. B) compared to conventional ZOO with finite differences (Nesterov & Spokoiny, 2017). (b) Compared to (Cheng et al., 2021; Wang et al., 2024), the algorithm maintains compelling computational and memory efficiency: the overhead of managing the history buffer (storing only random seeds as in (Malladi et al., 2023; Shu et al., 2025a)) and performing the averaging calculations is generally modest, scaling linearly with the history size. (c) ZoAR benefits from ease of implementation, representing a straightforward modification to standard ZOO procedures by incorporating a buffer and simple averaging steps.
(d) It offers enhanced sample efficiency and flexibility by leveraging the accumulated information in $\mathcal { H } _ { t }$ : a meaningful gradient estimate $\pmb { g } _ { t }$ can be computed even if only a small number of new queries (potentially $K = 1$ ) are performed at each iteration. These advantages make ZoAR a practical approach for improving ZOO performance, particularly in optimization settings where variance control and query efficiency are crucial.

[Figure 1: Optimality gap (log scale) versus iterations $T$ on the Quadratic, Levy, Rosenbrock, and Ackley synthetic functions for Vanilla ZOO, ReLIZO, ZoHS, ZoAR ($N = 1$), and ZoAR ($N = 6$).]

Table 1. Comparison of the minimal number of iterations to achieve a successful attack for different ZOO methods. Results are averaged over 5 runs. The speedup is compared against the Vanilla ZOO.

# 5. Experiments

In this section, we conduct extensive experiments on synthetic functions (Sec. 5.1) and black-box adversarial attacks (Sec. 5.2). More results, e.g., the equivalence between ZOO and REINFORCE and memory-efficient LLM fine-tuning, are in Appx. E.

# 5.1. Synthetic Functions

The Superiority of ZoAR. We evaluate the convergence rate and final performance of ZoAR against several baselines on four synthetic functions of dimensionality $d = 10^4$ (detailed in Appx. D.2). The compared methods include Vanilla ZOO (Nesterov & Spokoiny, 2017), ReLIZO (Wang et al., 2024), and ZoHS (details in Appx. D.1). Fig. 1 presents the results using the ZO-AdaMM (Chen et al., 2019) update rule, while corresponding results under the $\mathcal { R }$ -AdaZO (Shu et al., 2025b) update rule are available in Appx. E.2. The results in Fig.
1 show that ZoAR consistently outperforms all baseline algorithms in both convergence speed and final optimization performance. Notably, ZoAR with $N = 6$ achieves an $8\times$ speedup over Vanilla ZOO on the Quadratic and Rosenbrock functions, and a $16\times$ speedup on the Ackley function. Moreover, comparing ZoAR with $N = 6$ (utilizing query reuse) against ZoAR with $N = 1$ (using only the averaged baseline) illustrates the significant additional benefit of reusing historical information. # 5.2. Black-box Adversarial Attack We further evaluate the performance of ZoAR in the domain of black-box adversarial attacks, a prominent application of zeroth-order optimization (Cheng et al., 2021; Shu et al., 2023). In this scenario, the goal is to identify an optimal perturbation $\delta$ for a given input image $x$ such that a target black-box model misclassifies $x + \delta$. Our experimental setup follows that introduced by (Shu et al., 2025b), targeting a convolutional neural network (CNN) trained on the MNIST dataset (LeCun et al., 1998) (more details in Appx. D.3). We assess algorithm efficiency by the minimum number of iterations required to achieve a successful attack. The comparison includes Vanilla ZOO and ZoHS, each evaluated under both the ZO-AdaMM (Chen et al., 2019) and $\mathcal{R}$-AdaZO (Shu et al., 2025b) update rules. ReLIZO is omitted from this comparison as it failed to achieve a successful attack within the maximum query budget. The results are summarized in Tab. 1, showing that ZoAR achieves the fastest attack success across both update rules. Specifically, under the ZO-AdaMM setting, ZoAR achieves a $5.92\times$ speedup compared to Vanilla ZOO. The less pronounced speedup of ZoAR with $\mathcal{R}$-AdaZO (versus ZO-AdaMM) is likely due to the inherent gradient variance reduction of $\mathcal{R}$-AdaZO (Shu et al., 2025b), which may reduce the marginal impact of additional variance mitigation from ZoAR.
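As a concrete illustration, the history-buffer estimator with an averaged baseline described above can be sketched in a few lines of Python. This is a minimal sketch, not the paper's exact setup: the function names, the problem dimension, the choices of $\mu$, the step size, and the plain gradient-descent update are all illustrative (the paper uses ZO-AdaMM or $\mathcal{R}$-AdaZO update rules).

```python
import numpy as np
from collections import deque

def zoar_step(F, theta, buffer, K=4, mu=0.5, lr=0.05, rng=None):
    """One sketched ZoAR iteration: draw K new randomized finite-difference
    queries, push them into the fixed-size history buffer (the deque drops
    the oldest entries automatically), then estimate the gradient from ALL
    buffered samples using the averaged baseline b_t."""
    rng = rng or np.random.default_rng()
    for _ in range(K):
        u = rng.standard_normal(theta.size)
        buffer.append((u, F(theta + mu * u)))
    b = np.mean([y for _, y in buffer])                       # averaged baseline b_t
    g = sum((y - b) / mu * u for u, y in buffer) / (len(buffer) - 1)
    return theta - lr * g                                     # illustrative plain GD update

# Usage: minimize F(x) = ||x||^2 using function queries only (no gradients).
rng = np.random.default_rng(0)
F = lambda x: float(x @ x)
theta = np.ones(5)
buffer = deque(maxlen=6 * 4)          # N * K most recent queries (N = 6, K = 4)
for _ in range(400):
    theta = zoar_step(F, theta, buffer, rng=rng)
```

The `deque(maxlen=...)` mirrors the paper's buffer update: appending once it is full silently discards the oldest entry, so the buffer always holds the $N \times K$ most recent queries.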
Zeroth-Order Optimization (ZOO) provides powerful tools for optimizing functions where explicit gradients are unavailable or expensive to compute. However, the underlying mechanisms of popular ZOO methods, particularly those employing randomized finite differences, and their connection to other optimization paradigms like Reinforcement Learning (RL) are not fully elucidated. This paper establishes a fundamental and previously unrecognized connection: ZOO with finite differences is equivalent to a specific instance of single-step Policy Optimization (PO). We formally unveil that the implicitly smoothed objective function optimized by common ZOO algorithms is identical to a single-step PO objective. Furthermore, we show that widely used ZOO gradient estimators are mathematically equivalent to the REINFORCE gradient estimator with a specific baseline function, revealing the variance-reducing mechanism in ZOO from a PO perspective. Built on this unified framework, we propose ZoAR (Zeroth-Order Optimization with Averaged Baseline and Query Reuse), a novel ZOO algorithm incorporating PO-inspired variance reduction techniques: an averaged baseline from recent evaluations and query reuse analogous to experience replay. Our theoretical analysis further substantiates that these techniques reduce variance and enhance convergence. Extensive empirical studies validate our theory and demonstrate that ZoAR significantly outperforms other methods in terms of convergence speed and final performance. Overall, our work provides a new theoretical lens for understanding ZOO and offers practical algorithmic improvements derived from its connection to PO.
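As a brief sketch of the claimed equivalence (this restates the well-known Gaussian-smoothing identity of Nesterov & Spokoiny (2017), not the paper's full derivation): for the smoothed objective

$$ F_{\mu}(\pmb{\theta}) = \mathbb{E}_{\mathbf{u} \sim \mathcal{N}(0, I_d)}\big[ F(\pmb{\theta} + \mu \mathbf{u}) \big], \qquad \nabla F_{\mu}(\pmb{\theta}) = \mathbb{E}_{\mathbf{u}}\Big[ \frac{F(\pmb{\theta} + \mu \mathbf{u}) - b}{\mu}\, \mathbf{u} \Big], $$

for any baseline $b$ that does not depend on $\mathbf{u}$, since the baseline term vanishes in expectation ($\mathbb{E}[\mathbf{u}] = 0_d$). Reading $\pi_{\pmb{\theta}} = \mathcal{N}(\pmb{\theta}, \mu^2 I_d)$ as a Gaussian policy over actions $x = \pmb{\theta} + \mu\mathbf{u}$, the right-hand side is exactly the REINFORCE estimator of $\nabla_{\pmb{\theta}}\, \mathbb{E}_{x \sim \pi_{\pmb{\theta}}}[F(x)]$ with baseline $b$, which is the single-step PO reading of ZOO referred to above.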
# I. INTRODUCTION The safe (i.e., state-constrained) domain of attraction (DOA) of a given dynamical system is the set of state values from which trajectories are guaranteed to converge to an equilibrium point of interest under the system's dynamics, while satisfying specified state constraints. Such a set provides a safe operation region, while ensuring attractivity to the equilibrium point. DOAs are very prevalent, especially in safe stabilization scenarios, and this has motivated an immense amount of research on computing or approximating them. In this paper, we consider the problem of estimating the state-constrained DOA of a general discrete-time autonomous nonlinear system. In the literature, DOAs are predominantly estimated using the framework of Lyapunov functions, where candidate Lyapunov functions of fixed templates (e.g., quadratic forms and sum-of-squares polynomials [1]) are typically assumed. Then, the parameters of such templates are tuned to satisfy the standard Lyapunov conditions, or the more relaxed multi-step and non-monotonic Lyapunov conditions [2], [3]. Lyapunov-based approaches utilizing fixed templates are generally restrictive, providing, when they exist, conservative estimates of DOAs [4]. Interestingly, if an initial certifiable DOA estimate is provided (e.g., using quadratic Lyapunov functions), DOAs can be underapproximated arbitrarily closely using iterative computations of the backward reachable set of the initial DOA estimate, where such iterations are guaranteed to converge to the exact DOA [5], [6]. However, the complexity, in terms of the set representation of each DOA estimate, increases with each iteration, making the resulting estimates impractical in formal verification tasks.
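The backward-reachable-set iteration mentioned above can be illustrated with a minimal grid-based sketch on a toy scalar system; the dynamics, safe set, grid resolution, and number of iterations below are illustrative choices, and a set of grid points is of course not a formal (certifiable) set representation of the kind the iteration in [5], [6] operates on.

```python
# Toy stable dynamics x_{k+1} = 0.5 x_k; since f is linear, it maps
# integer grid "ticks" k (representing x = k / 100) to 0.5 * k directly.
def f(k):
    return 0.5 * k

# Discretized safe set X = (-1, 1) and an initial certified estimate |x| <= 0.1.
ticks = range(-99, 100)
est = {k for k in ticks if abs(k) <= 10}

for _ in range(5):
    # One backward step: a grid point joins the estimate when its one-step
    # image f(x), snapped to the grid, already lies in the current estimate.
    est |= {k for k in ticks if round(f(k)) in est}

# The under-approximation roughly doubles in radius per iteration,
# growing toward the full safe set (while never leaving it).
radius = max(abs(k) for k in est) / 100.0
```

Each iteration enlarges the estimate monotonically, mirroring the convergence of the exact backward iteration to the true DOA; the per-iteration growth of the set representation is what the text above identifies as the practical bottleneck.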
Recently, there has been a growing interest in using learning-based approaches to estimate DOAs, where neural networks are trained to satisfy standard Lyapunov conditions and then verification tools (e.g., interval arithmetic and mixed-integer programming) are implemented to ensure that the trained neural networks provide certifiable DOA estimates [7]–[12]. Despite the high computational efficiency associated with training neural networks, neural network verification typically suffers from high computational demands due to state-space discretization. Additionally, the resulting DOA estimates do not significantly outperform the standard Lyapunov-based approaches using fixed templates. Interestingly, there have been promising developments in the field of neural-network verification, where computationally efficient linear bound propagation and branch and bound have been utilized, enabling fast and scalable neural network verification [13]–[17]. For some classes of nonlinear systems with local exponential stability properties, DOAs can be characterized as sublevel sets of particular value functions, which are solutions to functional-type equations: the maximal Lyapunov and Zubov equations [5], [18]–[20]. Zubov equations are preferable when estimating DOAs as their solutions are bounded, where these solutions have been typically estimated numerically using discretization-based approximations [20] and sum-of-squares optimization [21], which are limited to low-dimensional systems. Still, the Zubov-based approaches are advantageous in the sense that, in theory, accurately approximating the solutions to the Zubov equation provides large DOA estimates.
In this work, motivated by the utility of neural-network approximations, the advancements in neural network verification, and the theoretical advantages associated with Zubov-based methods, we propose a DOA estimation framework for discrete-time autonomous nonlinear systems that relies on neural network approximations of solutions to a new Zubov equation that accounts for state constraints. Zubov equations have been developed for discrete-time nonlinear systems without [19] and with [20] state constraints. Interestingly, the framework in [20] even accounts for disturbances. The Zubov equation in [20] contains a non-smooth term (in terms of the min function) to account for state constraints. This non-smooth term can make neural network training, which typically relies on gradient-based optimization methods, more challenging. In this work, by tailoring value functions that correspond to the safe DOA, we present a new Zubov equation that does not possess this non-smooth term, making it more suited for neural network training. In addition, the framework in [20] assumes boundedness of the safe domain in addition to strong Lipschitz-type conditions on the system's dynamics and the state constraints, which we relax in our framework. Mere neural network approximate solutions to Zubov equations do not provide certifiable DOA estimates, as neural network training does not account for approximation errors. To obtain certifiable estimates from the neural-network approximations, we propose a verification framework, which is a discrete-time variation of the verification framework proposed in [22], where certifiable ellipsoidal DOA estimates and backward reachability computations are employed. This verification framework can be implemented using standard verification tools such as $\alpha,\beta$-CROWN [14], [15], [17], [23]–[25] and dReal [26]. The organization of this paper is as follows. The necessary preliminaries and notation are introduced in Section II.
The problem setup is introduced in Section III. Some properties of the safe DOA are discussed in Section IV. The new value functions are presented in Section V. The DOA characterization in terms of these value functions is introduced in Section VI. The properties of the value functions are presented in Section VII. The Zubov and Lyapunov functions corresponding to the value functions are discussed in Section VIII. The neural network approximation is presented in Section IX. The verification framework is presented in Section X. The proposed method is illustrated through three numerical examples in Section XI. Finally, the study is concluded in Section XII. # II. NOTATION AND PRELIMINARIES Let $\mathbb{R}$, $\mathbb{R}_+$, $\mathbb{Z}$, and $\mathbb{Z}_+$ denote the sets of real numbers, non-negative real numbers, integers, and non-negative integers, respectively, and $\mathbb{N} = \mathbb{Z}_+ \setminus \{0\}$. Let $[a,b]$, $]a,b[$, $[a,b[$, and $]a,b]$ denote closed, open, and half-open intervals, respectively, with endpoints $a$ and $b$, and let $[a;b]$, $]a;b[$, $[a;b[$, and $]a;b]$ stand for their discrete counterparts, e.g., $[a;b] = [a,b] \cap \mathbb{Z}$ and $[1;4[\, = \{1,2,3\}$. In $\mathbb{R}^n$, the relations $<$, $\leq$, $\geq$, and $>$ are defined component-wise, e.g., $a < b$, where $a, b \in \mathbb{R}^n$, iff $a_i < b_i$ for all $i \in [1;n]$. For $a, b \in \mathbb{R}^n$, $a \leq b$, the closed hyper-interval (or hyper-rectangle) $[[a,b]]$ denotes the set $\{x \in \mathbb{R}^n \mid a \leq x \leq b\}$. Let $\|\cdot\|$ and $\|\cdot\|_\infty$ denote the Euclidean and maximal norms on $\mathbb{R}^n$, respectively, and $\mathbb{B}_n$ be the $n$-dimensional closed unit ball induced by $\|\cdot\|$.
The $n$-dimensional zero vector is denoted by $0_n$. Let $\mathrm{id}_n$ denote the $n \times n$ identity matrix. For $A \in \mathbb{R}^{n \times m}$, $\|A\|$ and $\|A\|_\infty$ denote the matrix norms of $A$ induced by the Euclidean and maximal norms, respectively. Given $x \in \mathbb{R}^n$ and $A \in \mathbb{R}^{n \times m}$, $|x| \in \mathbb{R}_+^n$ and $|A| \in \mathbb{R}_+^{n \times m}$ are defined as $|x|_i := |x_i|$, $i \in [1;n]$, and $|A|_{i,j} := |A_{i,j}|$, $(i,j) \in [1;n] \times [1;m]$, respectively. Let $S^n$ denote the set of $n \times n$ real symmetric matrices. Given $A \in S^n$, $\underline{\lambda}(A)$ and $\overline{\lambda}(A)$ denote the minimum and maximum eigenvalues of $A$, respectively. Let $S_{++}^n$ denote the set of $n \times n$ real symmetric positive definite matrices $\{A \in S^n \mid \underline{\lambda}(A) > 0\}$. Given $A \in S_{++}^n$, $A^{\frac{1}{2}}$ denotes the unique real symmetric positive definite matrix $K$ satisfying $A = K^2$ [27, p. 220]. The interior and the boundary of $X \subseteq \mathbb{R}^n$ are denoted by $\mathrm{int}(X)$ and $\partial X$, respectively. Given $f\colon X \to Y$ and $P \subseteq X$, the image of $f$ on $P$ is defined as $f(P) := \{f(x) \mid x \in P\}$. Given $f\colon X \to X$ and $x \in X$, $f^0(x) := x$, and for $M \in \mathbb{N}$, we define $f^M(x)$ recursively as follows: $f^k(x) = f(f^{k-1}(x))$, $k \in [1;M]$. A subset $S \subseteq X$ is said to be invariant under a mapping $f\colon X \to X$ if $f(S) \subseteq S$. # III.
PROBLEM SETUP Consider the discrete-time system $$ x_{k+1} = f(x_k), \ k \in \mathbb{Z}_+, $$ where $x_k \in \mathbb{R}^n$ is the state and $f\colon \mathbb{R}^n \to \mathbb{R}^n$ is the system's transition function. The trajectory of system (1) starting from $x \in \mathbb{R}^n$ is the function $\varphi_x\colon \mathbb{Z}_+ \to \mathbb{R}^n$ satisfying: $$ \begin{array}{c} \varphi_x(0) = x, \\ \varphi_x(k+1) = f(\varphi_x(k)) = f^{k+1}(x), \ k \in \mathbb{Z}_+. \end{array} $$ Assumption 1: $f$ is continuous over $\mathbb{R}^n$, $0_n$ is an equilibrium point of system (1) (i.e., $f(0_n) = 0_n$), and $0_n$ is locally exponentially stable. That is, there exist fixed parameters $r \in ]0,\infty[$, $M \in [1,\infty[$, and $\lambda \in ]0,1[$ such that for all $x \in r\mathbb{B}_n$, $\|\varphi_x(k)\| \leq M\lambda^k \|x\|$, $k \in \mathbb{Z}_+$. Let $\mathcal{X} \subseteq \mathbb{R}^n$ be a safe set. We make the following assumption. Assumption 2: $\mathcal{X}$ is open and $0_n \in \mathcal{X}$. Define the safe DOA inside $\mathcal{X}$ as $$ \mathcal{D}_0^{\mathcal{X}} := \left\{ x \in \mathcal{X} \,\middle|\, \varphi_x(k) \in \mathcal{X} \ \forall k \in \mathbb{Z}_+, \ \lim_{k \to \infty} \varphi_x(k) = 0_n \right\}. $$ Any invariant subset of $\mathcal{D}_0^{\mathcal{X}}$ under $f$, containing $0_n$ in its interior, is called a safe region of attraction (ROA) in $\mathcal{X}$. Our goal is to compute a large safe ROA that closely approximates $\mathcal{D}_0^{\mathcal{X}}$. # IV.
PROPERTIES OF THE DOA In this section, we introduce some important properties of the safe DOA $\mathcal{D}_0^{\mathcal{X}}$, which will be utilized in the proofs of the main results of this work. Theorem 1: The set $\mathcal{D}_0^{\mathcal{X}} \subseteq \mathcal{X}$ is nonempty, invariant under $f$, and open. Proof: The non-emptiness follows from the fact that $0_n \in \mathcal{D}_0^{\mathcal{X}}$. The invariance property can be deduced as follows: for $x \in \mathcal{D}_0^{\mathcal{X}}$, $\varphi_x(k) \in \mathcal{X} \ \forall k \in \mathbb{Z}_+$ and $\lim_{k \to \infty} \varphi_x(k) = 0_n$. This implies that $\varphi_x(k+1) = \varphi_{f(x)}(k) \in \mathcal{X} \ \forall k \in \mathbb{Z}_+$ and $\lim_{k \to \infty} \varphi_x(k+1) = \lim_{k \to \infty} \varphi_{f(x)}(k) = 0_n$. Hence, $f(x) \in \mathcal{D}_0^{\mathcal{X}}$. Now, we prove that $\mathcal{D}_0^{\mathcal{X}}$ is open. Recall the definitions of $M$, $r$, $\lambda$ in Assumption 1. Let $\theta \in ]0,\infty[$ be such that $\theta\mathbb{B}_n \subseteq \mathcal{X}$, which exists due to the openness of $\mathcal{X}$ and the fact that $0_n \in \mathcal{X}$.
Fix $x_0 \in \mathcal{D}_0^{\mathcal{X}}$; then, due to the convergence of $\varphi_{x_0}$ to $0_n$ within $\mathcal{X}$, there exists $N \in \mathbb{Z}_+$ such that $\varphi_{x_0}(j) \in \mathcal{X} \ \forall j \in [0;N-1]$ and $\varphi_{x_0}(N) \in \frac{\tilde{r}}{2}\mathbb{B}_n$, where $\tilde{r}$ satisfies $0 < \tilde{r} \leq \min\{\frac{\theta}{M}, r\}$. Let $\rho_j$, $j \in [0;N-1]$, be positive numbers satisfying $\varphi_{x_0}(j) + \rho_j\mathbb{B}_n \subset \mathcal{X}$, $j \in [0;N-1]$. Such numbers exist due to the openness of $\mathcal{X}$. Note that $\varphi_y(j) = f^j(y)$ for all $y \in \mathbb{R}^n$ and $j \in \mathbb{Z}_+$. As $f$ is continuous over $\mathbb{R}^n$, $f^2, \ldots, f^N$ are also continuous at $x_0$. Therefore, there exists $\delta \in ]0,\infty[$ such that, for all $x \in x_0 + \delta\mathbb{B}_n$, $\|f^j(x) - f^j(x_0)\| < \rho_j$, $j \in [0;N-1]$, and $\|f^N(x) - f^N(x_0)\| < \frac{\tilde{r}}{2}$.
Consequently, we have, for all $x \in x_0 + \delta\mathbb{B}_n$, $\varphi_x(j) = f^j(x) \in \varphi_{x_0}(j) + \rho_j\mathbb{B}_n \subset \mathcal{X}$, $j \in [0;N-1]$, and $\|\varphi_x(N)\| = \|f^N(x)\| \leq \|f^N(x) - f^N(x_0)\| + \|f^N(x_0)\| \leq \tilde{r} \leq r$. The local exponential stability then indicates that, for all $x \in x_0 + \delta\mathbb{B}_n$, $\|f^{N+k}(x)\| \leq M\|f^N(x)\|\lambda^k \leq M\tilde{r}\lambda^k$, $k \in \mathbb{Z}_+$. Hence, $\varphi_x(j) = f^j(x) \to 0_n$ as $j \to \infty$ for all $x \in x_0 + \delta\mathbb{B}_n$. Also, the local exponential stability and the definition of $\tilde{r}$ imply that, for all $x \in x_0 + \delta\mathbb{B}_n$ and $k \in \mathbb{Z}_+$, $\|f^{N+k}(x)\| \leq M\tilde{r} \leq \theta$, and hence $\varphi_x(N+k) = f^{N+k}(x) \in \theta\mathbb{B}_n \subseteq \mathcal{X} \ \forall k \in \mathbb{Z}_+$. Therefore, $x \in \mathcal{D}_0^{\mathcal{X}}$ for all $x \in x_0 + \delta\mathbb{B}_n$. As $x_0 \in \mathcal{D}_0^{\mathcal{X}}$ is arbitrary, the proof is complete. # V. VALUE FUNCTIONS In this section, we introduce the value functions that can be used to characterize $\mathcal{D}_0^{\mathcal{X}}$ and to derive Lyapunov- and Zubov-type equations.
To this end, we let $\alpha\colon \mathbb{R}^n \to \mathbb{R}_+$ be a positive definite continuous function such that $$ \alpha_m\|x\|^2 \leq \alpha(x) \leq \alpha_M\|x\|^2, \ x \in \mathbb{R}^n, $$ for some $\alpha_m, \alpha_M \in ]0,\infty[$. The definition of $\alpha$ and the parameters $\alpha_m$ and $\alpha_M$ are fixed throughout the following discussion. Assumption 3: There exists a function $\gamma\colon \mathbb{R}^n \to \mathbb{R}_+ \cup \{\infty\}$ satisfying: 1) $\gamma$ is finite and continuous over $\mathcal{X}$, and there exists $\underline{\gamma} \in \mathbb{R}_+ \setminus \{0\}$ such that $$ \gamma(x) \geq \underline{\gamma} \ \forall x \in \mathcal{X}, $$ 2) $\gamma(x) = \infty$ whenever $x \not\in \mathcal{X}$, 3) $\gamma(x_n) \to \infty$ for any sequence $\{x_n\}$ with $x_n \to x \in \partial\mathcal{X}$. Remark 1: If $\mathcal{X}$ is a strict 1-sublevel set of a continuous function $g_{\mathcal{X}}\colon \mathbb{R}^n \to \mathbb{R}$, i.e., $\mathcal{X} = \{x \in \mathbb{R}^n \mid g_{\mathcal{X}}(x) < 1\}$, then we can define $$ \gamma(x) = 1 + \frac{1}{\mathrm{ReLu}(1 - g_{\mathcal{X}}(x))}, $$ where $1/0 := \infty$ and $\mathrm{ReLu}\colon \mathbb{R} \to \mathbb{R}$ is the rectified linear unit function defined as $\mathrm{ReLu}(x) = (x + |x|)/2$, $x \in \mathbb{R}$.
With this definition of $\gamma$, the conditions of Assumption 3 hold with $\underline{\gamma} = 1$. We define the value functions $\mathcal{V}\colon \mathbb{R}^n \to \mathbb{R}_+ \cup \{\infty\}$ and $\mathcal{W}\colon \mathbb{R}^n \to [0,1]$ as follows: $$ \mathcal{V}(x) := \sum_{k=0}^{\infty} \gamma(\varphi_x(k))\alpha(\varphi_x(k)), $$ and $$ \mathcal{W}(x) := 1 - \exp(-\mathcal{V}(x)), $$ where $\exp(-\infty) := 0$. # VI. CHARACTERIZING THE DOA USING THE VALUE FUNCTIONS In this section, we characterize the safe DOA in terms of the sublevel sets corresponding to the value functions $\mathcal{V}$ and $\mathcal{W}$. Theorem 2: $$ \begin{array}{r} \mathcal{D}_0^{\mathcal{X}} = \mathbb{V}_\infty := \{x \in \mathbb{R}^n \mid \mathcal{V}(x) < \infty\} \\ = \mathbb{W}_1 := \{x \in \mathbb{R}^n \mid \mathcal{W}(x) < 1\}. \end{array} $$ Proof: Due to the one-to-one correspondence between the codomains of $\mathcal{V}$ and $\mathcal{W}$ via equation (3), it is sufficient to show only that $\mathbb{V}_\infty = \mathcal{D}_0^{\mathcal{X}}$. Let $x \in \mathbb{V}_\infty$. We will show that $\varphi_x(k) \in \mathcal{X}$ for all $k \in \mathbb{Z}_+$. By contradiction, assume that $\varphi_x(N) \not\in \mathcal{X}$ for some $N \in \mathbb{Z}_+$. Then, by the definition of $\gamma$, $\gamma(\varphi_x(N)) = \infty$, implying that $\mathcal{V}(x) = \infty$, which contradicts the fact that $x \in \mathbb{V}_\infty$. Hence, we have $\varphi_x(k) \in \mathcal{X}$ for all $k \in \mathbb{Z}_+$.
Using the lower bound on $\gamma$, we have $\alpha(\varphi_x(k)) \leq \gamma(\varphi_x(k))\alpha(\varphi_x(k))/\underline{\gamma}$, $k \in \mathbb{Z}_+$. As $\sum_{k=0}^{\infty} \gamma(\varphi_x(k))\alpha(\varphi_x(k))$ is convergent, then by the comparison test, $\sum_{k=0}^{\infty} \alpha(\varphi_x(k))$ is also convergent, implying $\lim_{k \to \infty} \alpha(\varphi_x(k)) = 0$. Using the lower bound on $\alpha$, we have $\lim_{k \to \infty} \|\varphi_x(k)\|^2 \leq \lim_{k \to \infty} \frac{1}{\alpha_m}\alpha(\varphi_x(k)) = 0$. Therefore, $x \in \mathcal{D}_0^{\mathcal{X}}$. Now, let $x \in \mathcal{D}_0^{\mathcal{X}}$ and recall the definitions of $M$, $r$, $\lambda$ in Assumption 1. Note that $0 < \underline{\gamma} \leq \gamma(\varphi_x(j)) < \infty$ for all $j \in \mathbb{Z}_+$, as $\varphi_x(j) \in \mathcal{X}$, $j \in \mathbb{Z}_+$. Let $\theta \in ]0,\infty[$ be such that $\theta\mathbb{B}_n \subseteq \mathcal{X}$, which exists due to the openness of $\mathcal{X}$ and the fact that $0_n \in \mathcal{X}$. Due to the convergence of $\varphi_x$ to $0_n$, there exists $N \in \mathbb{Z}_+$ such that $\varphi_x(N) = y \in \tilde{r}\mathbb{B}_n$, where $0 < \tilde{r} \leq \min\{\theta/M, r\}$. As $\varphi_x(N) \in \tilde{r}\mathbb{B}_n$, the local exponential stability and the definition of $\tilde{r}$ imply that $\|\varphi_x(N+k)\| = \|\varphi_y(k)\| \leq M\lambda^k\|y\| \leq M\lambda^k\tilde{r} \leq \lambda^k\theta \leq \theta$, $k \in \mathbb{Z}_+$. Hence, $\alpha(\varphi_x(N+k)) \leq \alpha_M\|\varphi_x(N+k)\|^2 \leq \alpha_M\lambda^{2k}\theta^2$, $k \in \mathbb{Z}_+$. Let $\Gamma_\theta \in ]0,\infty[$ be such that $\gamma(x) \leq \Gamma_\theta \ \forall x \in \theta\mathbb{B}_n$, which exists due to the compactness of $\theta\mathbb{B}_n$ and the continuity of $\gamma$ over $\theta\mathbb{B}_n$. Consequently, we have $\mathcal{V}(x) = \sum_{k=0}^{\infty} \gamma(\varphi_x(k))\alpha(\varphi_x(k)) = \sum_{k=0}^{N-1} \gamma(\varphi_x(k))\alpha(\varphi_x(k)) + \sum_{k=N}^{\infty} \gamma(\varphi_x(k))\alpha(\varphi_x(k)) \leq \sum_{k=0}^{N-1} \gamma(\varphi_x(k))\alpha(\varphi_x(k)) + \alpha_M\Gamma_\theta\theta^2 \sum_{k=0}^{\infty} \lambda^{2k} = \sum_{k=0}^{N-1} \gamma(\varphi_x(k))\alpha(\varphi_x(k)) + \alpha_M\Gamma_\theta\theta^2 \frac{1}{1-\lambda^2} < \infty$. Hence, $x \in \mathbb{V}_\infty$, and that completes the proof. # VII.
PROPERTIES OF THE VALUE FUNCTIONS In this section, we state some important properties of the functions $\mathcal{V}$ and $\mathcal{W}$. Lemma 3: The functions $\mathcal{V}$ and $\mathcal{W}$ are positive definite. This is an immediate consequence of the definitions. Theorem 4: $\mathcal{V}$ is continuous over $\mathcal{D}_0^{\mathcal{X}}$. Proof: Recall the definitions of $M$, $r$, $\lambda$ in Assumption 1. Let $\theta \in ]0,\infty[$ be such that $\theta\mathbb{B}_n \subseteq \mathcal{X}$, which exists due to the openness of $\mathcal{X}$ and the fact that $0_n \in \mathcal{X}$. Let $x_0 \in \mathcal{D}_0^{\mathcal{X}}$ and $\varepsilon > 0$ be arbitrary, where we assume without loss of generality that $\varepsilon \leq \min\{\theta/M, r\}$. Then there exists $N \in \mathbb{Z}_+$ such that $\varphi_{x_0}(N) \in \frac{\varepsilon}{2}\mathbb{B}_n$, where the exponential stability indicates that $\|\varphi_{x_0}(N+k)\| \leq M\varepsilon\lambda^k \leq \theta$, $k \in \mathbb{Z}_+$. Let $\Gamma_\theta \in ]0,\infty[$ be such that $\gamma(x) \leq \Gamma_\theta \ \forall x \in \theta\mathbb{B}_n$, which exists due to the compactness of $\theta\mathbb{B}_n$ and the continuity of $\gamma$ over $\theta\mathbb{B}_n$. Consequently, we have $\sum_{k=N}^{\infty} \gamma(\varphi_{x_0}(k))\alpha(\varphi_{x_0}(k)) \leq \sum_{k=0}^{\infty} \Gamma_\theta\alpha_M\|\varphi_{x_0}(N+k)\|^2 \leq \Gamma_\theta\alpha_M\frac{M^2\varepsilon^2}{1-\lambda^2}$.
Let $\delta > 0$ be such that $x_0 + \delta\mathbb{B}_n \subset \mathcal{D}_0^{\mathcal{X}}$ and, for all $x \in x_0 + \delta\mathbb{B}_n$, $|\sum_{k=0}^{N-1} \gamma(\varphi_{x_0}(k))\alpha(\varphi_{x_0}(k)) - \gamma(\varphi_x(k))\alpha(\varphi_x(k))| < \varepsilon$ and $\|\varphi_{x_0}(N) - \varphi_x(N)\| \leq \frac{\varepsilon}{2}$. Such $\delta$ exists due to the openness of $\mathcal{D}_0^{\mathcal{X}}$, the continuity of $f$ (and consequently the continuity of $f^i$, $i \in [1;N]$), the continuity of $\alpha$, and the continuity of $\gamma$ over $\mathcal{X}$. Consequently, for all $x \in x_0 + \delta\mathbb{B}_n$, $\|\varphi_x(N)\| \leq \|\varphi_{x_0}(N) - \varphi_x(N)\| + \|\varphi_{x_0}(N)\| \leq \varepsilon$. The exponential stability indicates that, for all $x \in x_0 + \delta\mathbb{B}_n$, $\|\varphi_x(N+k)\| \leq M\varepsilon\lambda^k \leq \theta$, $k \in \mathbb{Z}_+$. Therefore, $\sum_{k=N}^{\infty} \gamma(\varphi_x(k))\alpha(\varphi_x(k)) \leq \sum_{k=0}^{\infty} \Gamma_\theta\alpha_M\|\varphi_x(N+k)\|^2 \leq \Gamma_\theta\alpha_M\frac{M^2\varepsilon^2}{1-\lambda^2}$ for all $x \in x_0 + \delta\mathbb{B}_n$.
Finally, we have, for all $x \in x_0 + \delta\mathbb{B}_n$, $|\mathcal{V}(x_0) - \mathcal{V}(x)| \leq |\sum_{k=0}^{N-1} \gamma(\varphi_{x_0}(k))\alpha(\varphi_{x_0}(k)) - \gamma(\varphi_x(k))\alpha(\varphi_x(k))| + \sum_{k=N}^{\infty} \gamma(\varphi_{x_0}(k))\alpha(\varphi_{x_0}(k)) + \sum_{k=N}^{\infty} \gamma(\varphi_x(k))\alpha(\varphi_x(k)) \leq \varepsilon + \Gamma_\theta\alpha_M\frac{M^2\varepsilon^2}{1-\lambda^2} + \Gamma_\theta\alpha_M\frac{M^2\varepsilon^2}{1-\lambda^2}$. As $\varepsilon$ is arbitrary, the proof is complete. Theorem 5: $\mathcal{V}(x_k) \to \infty$ whenever $x_k \to x \in \partial\mathcal{D}_0^{\mathcal{X}}$. Proof: Without loss of generality, consider a sequence $\{x_k\} \subseteq \mathcal{D}_0^{\mathcal{X}}$, where $x_k \to x \in \partial\mathcal{D}_0^{\mathcal{X}}$. Let $\theta \in ]0,\infty[$ be such that $\theta < r$ and $\theta\mathbb{B}_n \subseteq \mathcal{X}$. Let $\tilde{r} \in ]0, \theta/M^2[$. For each $k \in \mathbb{Z}_+$, let $T_k \in \mathbb{Z}_+$ be the first time instance such that $\varphi_{x_k}(T_k) \in \tilde{r}\mathbb{B}_n$.
If the sequence $\{T_k\}$ diverges to $\infty$, then we have $\mathcal{V}(x_k) \ge \sum_{j=0}^{T_k-1} \gamma(\varphi_{x_k}(j))\alpha(\varphi_{x_k}(j)) \ge \underline{\gamma}\alpha_m \tilde{r}^2 (T_k - 1) \to \infty$ as $k \to \infty$. Assume that $\{T_k\}$ does not diverge to $\infty$; then there exists a bounded subsequence, again denoted $\{T_k\}$, with an upper bound $T \in \mathbb{Z}_+$ such that $T_k \le T$, $k \in \mathbb{Z}_+$. It follows, as $\tilde{r} \le \theta/M^2 \le \theta < r$, that $\varphi_{x_k}(T_k) \in \tilde{r}\mathbb{B}_n \subseteq r\mathbb{B}_n$, $k \in \mathbb{Z}_+$. Therefore, $\|\varphi_{x_k}(T_k + j)\| \le M\lambda^j \tilde{r} \le \theta/M$, $j, k \in \mathbb{Z}_+$. Hence, $\varphi_{x_k}(T) \in (\theta/M)\mathbb{B}_n \subseteq r\mathbb{B}_n$, $k \in \mathbb{Z}_+$, implying, using the continuity of $f^T(\cdot)$, $\varphi_x(T) \in (\theta/M)\mathbb{B}_n \subseteq r\mathbb{B}_n$. Therefore, $\|\varphi_x(T+j)\| \le M(\theta/M)\lambda^j = \theta\lambda^j \le \theta$, $j \in \mathbb{Z}_+$. This implies that $\varphi_x(T+j) \in \mathcal{X}$, $j \in \mathbb{Z}_+$, and $\varphi_x \to 0_n$ exponentially. If $\varphi_x(j) \in \mathcal{X}$ for all $j \in [0;T-1]$, it follows that $\mathcal{V}(x) < \infty$, implying $x$ is an interior point of $\mathcal{D}_0^{\mathcal{X}}$, which yields a contradiction.
Now, assume $\varphi_x(j) = y \in \mathbb{R}^n \setminus \mathcal{X}$ for some $j \in [0;T-1]$. Then, using the continuity of $f^j$, it follows that $\varphi_{x_k}(j) \to y$ as $k \to \infty$. This yields $\lim_{k\to\infty} \gamma(\varphi_{x_k}(j)) = \infty$ and consequently $\mathcal{V}(x_k) \to \infty$ as $k \to \infty$.

Corollary 1: $\mathcal{W}$ is continuous over $\mathbb{R}^n$.

# VIII. CHARACTERIZING THE VALUE FUNCTIONS: LYAPUNOV AND ZUBOV EQUATIONS

In this section, we derive the Lyapunov and Zubov equations corresponding to the functions $\mathcal{V}$ and $\mathcal{W}$, respectively.

Theorem 6: For all $x \in \mathbb{R}^n$, $\mathcal{V}$ satisfies the maximal Lyapunov equation (w.r.t. the function $v$)

$$ v(x) = \gamma(x)\alpha(x) + v(f(x)). $$

Proof: Given $x \in \mathbb{R}^n$ and the definition of $\mathcal{V}$ in (2), we have $\mathcal{V}(x) = \sum_{k=0}^{\infty} \gamma(\varphi_x(k))\alpha(\varphi_x(k)) = \gamma(x)\alpha(x) + \sum_{k=1}^{\infty} \gamma(\varphi_x(k))\alpha(\varphi_x(k)) = \gamma(x)\alpha(x) + \sum_{k=0}^{\infty} \gamma(\varphi_{f(x)}(k))\alpha(\varphi_{f(x)}(k)) = \gamma(x)\alpha(x) + \mathcal{V}(f(x))$. Note that the above decomposition is valid even if $\mathcal{V}(x)$ is infinite.

Theorem 7: For all $x \in \mathbb{R}^n$, $\mathcal{W}$ satisfies the Zubov equation (w.r.t.
the function $w$)

$$ w(x) - w(f(x)) = \xi(x)(1 - w(f(x))), $$

where

$$ \xi(x) := 1 - \exp(-\gamma(x)\alpha(x)). $$

Proof: Using Theorem 6, we have, for any $x \in \mathbb{R}^n$, $\mathcal{W}(x) = 1 - \exp(-\mathcal{V}(x)) = 1 - \exp(-\mathcal{V}(f(x)) - \gamma(x)\alpha(x)) = 1 - \exp(-\gamma(x)\alpha(x))(1 - \mathcal{W}(f(x)))$, implying $\mathcal{W}(x) - \mathcal{W}(f(x)) = 1 - \mathcal{W}(f(x)) - \exp(-\gamma(x)\alpha(x))(1 - \mathcal{W}(f(x))) = \xi(x)(1 - \mathcal{W}(f(x)))$.

Theorem 8: If $w : D \subseteq \mathcal{X} \to \mathbb{R}$ satisfies equation (5) over $D$, then for all $x \in D$, $w$ satisfies the Zubov equation

$$ w(x) - w(f(x)) = \beta(x)(1 - w(x)), $$

where

$$ \beta(x) := \exp(\gamma(x)\alpha(x)) - 1. $$

Proof: When $x \in D \subseteq \mathcal{X}$, $\gamma(x) < \infty$; therefore, using equation (5), $1 - w(f(x)) = \exp(\gamma(x)\alpha(x))(1 - w(x))$. Hence, $w(x) - w(f(x)) = w(x) - 1 + \exp(\gamma(x)\alpha(x))(1 - w(x)) = \exp(\gamma(x)\alpha(x))(1 - w(x)) - (1 - w(x)) = \beta(x)(1 - w(x))$.

# A. Lyapunov and Zubov equations: Uniqueness results

We have shown that the value functions $\mathcal{V}$ and $\mathcal{W}$ are solutions to the equations (4) and (5), respectively. In what follows, we show that the solutions to these equations are unique within the class of functions that are continuous at the origin.
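As a quick numerical sanity check of Theorems 6 and 7, the two identities can be verified on a toy scalar system. The choices below (the map $f(x) = 0.5x$, $\gamma \equiv 1$ with no unsafe set, and $\alpha(x) = x^2$) are illustrative assumptions of this sketch, not taken from the paper:

```python
import numpy as np

# Toy scalar system (illustrative assumptions): f(x) = 0.5*x is globally
# exponentially stable, gamma = 1 everywhere (no unsafe set in this sketch),
# and alpha(x) = x^2.
f = lambda x: 0.5 * x
alpha = lambda x: x ** 2

def V(x, M=200):
    """Truncated value function (2): sum of gamma*alpha along the trajectory."""
    total = 0.0
    for _ in range(M):
        total += alpha(x)
        x = f(x)
    return total

W = lambda x: 1.0 - np.exp(-V(x))        # W = 1 - exp(-V)
xi = lambda x: 1.0 - np.exp(-alpha(x))   # xi(x) = 1 - exp(-gamma(x)*alpha(x))

x0 = 0.7
# Theorem 6: V(x) = gamma(x)*alpha(x) + V(f(x))
lyap_gap = V(x0) - (alpha(x0) + V(f(x0)))
# Theorem 7: W(x) - W(f(x)) = xi(x)*(1 - W(f(x)))
zubov_gap = (W(x0) - W(f(x0))) - xi(x0) * (1.0 - W(f(x0)))
```

For this system $\mathcal{V}(x) = \frac{4}{3}x^2$ in closed form, and both gaps vanish up to the (negligible) truncation error of the rollout.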
We start with the following technical result.

Lemma 9: Assume that $w : \mathcal{D}_0^{\mathcal{X}} \to \mathbb{R}$ is continuous at the origin, with $w(0_n) = 0$, and satisfies equation (5) over $\mathcal{D}_0^{\mathcal{X}}$. Then $w(x) < 1$ for all $x \in \mathcal{D}_0^{\mathcal{X}}$.

Proof: Using Theorem 8, $w$ satisfies equation (7) over $\mathcal{D}_0^{\mathcal{X}}$. Assume that $w(x) \ge 1$ for some $x \in \mathcal{D}_0^{\mathcal{X}}$. We have $\varphi_x(k) \in \mathcal{D}_0^{\mathcal{X}}$ for all $k \in \mathbb{Z}_+$ and $\lim_{k\to\infty} \varphi_x(k) = 0_n$. Using equation (7), we have $w(x) - w(\varphi_x(1)) = \beta(x)(1 - w(x)) \le 0$. Hence, $w(\varphi_x(1)) \ge w(x) \ge 1$, and by induction, $w(\varphi_x(k+1)) \ge w(\varphi_x(k)) \ge 1$ for all $k \in \mathbb{Z}_+$. Hence, $\lim_{k\to\infty} w(\varphi_x(k)) \neq 0$, which yields a contradiction: the continuity of $w$ at the origin, together with $w(0_n) = 0$ and $\lim_{k\to\infty} \varphi_x(k) = 0_n$, forces $\lim_{k\to\infty} w(\varphi_x(k)) = 0$. This completes the proof.

Theorem 10: Let $\mathbf{v} : \mathcal{D}_0^{\mathcal{X}} \to \mathbb{R}$ be a function continuous at the origin and satisfying $\mathbf{v}(0_n) = 0$. Assume that $\mathbf{v}$ satisfies equation (4) over $\mathcal{D}_0^{\mathcal{X}}$. Then $\mathbf{v}(x) = \mathcal{V}(x)$ for all $x \in \mathcal{D}_0^{\mathcal{X}}$.
Proof: Let $x \in \mathcal{D}_0^{\mathcal{X}}$. As $\mathcal{D}_0^{\mathcal{X}}$ is invariant, we have $\varphi_x(k) \in \mathcal{D}_0^{\mathcal{X}}$ for all $k \in \mathbb{Z}_+$. Let $k \in \mathbb{N}$. We consequently have $\mathbf{v}(x) - \mathbf{v}(\varphi_x(k)) = \sum_{j=0}^{k-1} (\mathbf{v}(\varphi_x(j)) - \mathbf{v}(\varphi_x(j+1))) = \sum_{j=0}^{k-1} \gamma(\varphi_x(j))\alpha(\varphi_x(j))$. As $\mathbf{v}$ is continuous at the origin with $\mathbf{v}(0_n) = 0$, $\varphi_x(k) \to 0_n$ as $k \to \infty$, and $\mathcal{V}(x) < \infty$ (the partial sums $\sum_{j=0}^{k-1} \gamma(\varphi_x(j))\alpha(\varphi_x(j))$ converge), taking the limit as $k \to \infty$ on both sides of the above equation results in $\mathbf{v}(x) = \sum_{j=0}^{\infty} \gamma(\varphi_x(j))\alpha(\varphi_x(j)) = \mathcal{V}(x)$.

Theorem 11: Let $\mathbf{w} : \mathbb{R}^n \to \mathbb{R}$ be a bounded function, continuous at the origin, with $\mathbf{w}(0_n) = 0$, and satisfying equation (5) over $\mathbb{R}^n$. Then $\mathbf{w} = \mathcal{W}$.
Proof: First note that the difference of two solutions, $\mathbf{w}_1$ and $\mathbf{w}_2$, to equation (5) satisfies, for $x \in \mathbb{R}^n$,

$$ \mathbf{w}_1(x) - \mathbf{w}_2(x) = (1 - \xi(x))(\mathbf{w}_1(f(x)) - \mathbf{w}_2(f(x))). $$

When $x \in \mathbb{R}^n \setminus \mathcal{X}$, we have $\xi(x) = 1$, implying $\mathbf{w}(x) = 1 = \mathcal{W}(x)$. Over $\mathcal{D}_0^{\mathcal{X}}$, $\mathbf{v}(\cdot) = -\ln(1 - \mathbf{w}(\cdot))$ is well-defined due to Lemma 9 and satisfies equation (4). Then, using Theorem 10, it follows that $\mathbf{v}(x) = \mathcal{V}(x)$, hence $\mathbf{w}(x) = \mathcal{W}(x)$ for all $x \in \mathcal{D}_0^{\mathcal{X}}$. Let $x \in \mathcal{X}$ be such that $\varphi_x(j) \in \mathbb{R}^n \setminus \mathcal{X}$ or $\varphi_x(j) \in \mathcal{D}_0^{\mathcal{X}}$ for some $j \in \mathbb{N}$, with $\varphi_x(k) \in \mathcal{X}$ for all $k \in [0;j-1]$. Then, using equation (9), $\mathbf{w}(\varphi_x(j-1)) - \mathcal{W}(\varphi_x(j-1)) = (1 - \xi(\varphi_x(j-1)))(\mathbf{w}(\varphi_x(j)) - \mathcal{W}(\varphi_x(j))) = 0$, and by an inductive argument, $\mathbf{w}(\varphi_x(k)) - \mathcal{W}(\varphi_x(k)) = 0$ for all $k \in [0;j]$, implying $\mathbf{w}(x) = \mathcal{W}(x)$. Now, let $x \in \mathcal{X}$ be such that $\varphi_x(k) \in \mathcal{X}$ for all $k \in \mathbb{Z}_+$ and $\lim_{k\to\infty} \varphi_x(k) \neq 0_n$.
Obviously, $\varphi_x(k) \notin \mathcal{D}_0^{\mathcal{X}}$ for all $k \in \mathbb{Z}_+$; it then follows, using the openness of $\mathcal{D}_0^{\mathcal{X}}$, that there exists $\theta > 0$ such that $\|\varphi_x(k)\| \ge \theta$ for all $k \in \mathbb{Z}_+$. Assume $\mathbf{w}(x) \neq \mathcal{W}(x)$. Note that the difference of two solutions, $\mathbf{w}_1$ and $\mathbf{w}_2$, to equation (5) over $\mathcal{X}$ (which are also solutions to (7) over $\mathcal{X}$ by Theorem 8) satisfies, for $x \in \mathcal{X}$,

$$ \mathbf{w}_1(f(x)) - \mathbf{w}_2(f(x)) = (1 + \beta(x))(\mathbf{w}_1(x) - \mathbf{w}_2(x)). $$

It then follows that, for all $j \in \mathbb{N}$, $\mathbf{w}(\varphi_x(j)) - \mathcal{W}(\varphi_x(j)) = \prod_{k=0}^{j-1} (1 + \beta(\varphi_x(k)))(\mathbf{w}(x) - \mathcal{W}(x))$. For all $k \in [0;j-1]$, we have $1 + \beta(\varphi_x(k)) = \exp(\gamma(\varphi_x(k))\alpha(\varphi_x(k))) \ge \exp(\underline{\gamma}\alpha_m \|\varphi_x(k)\|^2) \ge \exp(\underline{\gamma}\alpha_m \theta^2)$. Hence, $|\mathbf{w}(\varphi_x(j)) - \mathcal{W}(\varphi_x(j))| \ge \exp(j\underline{\gamma}\alpha_m\theta^2)|\mathbf{w}(x) - \mathcal{W}(x)|$, and $\lim_{j\to\infty} \exp(j\underline{\gamma}\alpha_m\theta^2) = \infty$.
It then follows that for each $M > 0$, there exists $y \in \mathcal{X}$ (a point on the trajectory $\varphi_x$) such that $\mathbf{w}(y) > M + \mathcal{W}(y) \ge M - 1$ or $\mathbf{w}(y) < -M + \mathcal{W}(y) \le -M + 1 = -(M-1)$, which is equivalent to $|\mathbf{w}(y)| > M - 1$. Hence, $\mathbf{w}$ is unbounded over $\mathcal{X}$, a contradiction.

# IX. PHYSICS-INFORMED NEURAL SOLUTION

Let $W_N(\cdot;\theta) : \mathbb{R}^n \to \mathbb{R}$ be a fully-connected feedforward neural network with parameters denoted by $\theta$. We now train $W_N$ to solve Zubov's equation (5) on a compact set $\mathbb{X} \subset \mathbb{R}^n$, with $\mathcal{X} \cap \mathbb{X} \neq \emptyset$, by minimizing the following loss function:

$$ \begin{array}{c} \displaystyle \mathrm{Loss}(\theta) = \frac{1}{N_c} \sum_{i=1}^{N_c} \big( W_N(x_i;\theta) - W_N(f(x_i);\theta) - \xi(x_i)(1 - W_N(f(x_i);\theta)) \big)^2 \\ \displaystyle + \lambda_d \frac{1}{N_d} \sum_{i=1}^{N_d} \big( W_N(z_i;\theta) - \hat{W}(z_i) \big)^2, \end{array} $$

where $\lambda_d > 0$ is a user-defined weighting parameter. Here, the points $\{x_i\}_{i=1}^{N_c} \subset \mathbb{X}$ are the (interior) collocation points used to compute the residual error of (5). To ensure an accurate neural network solution, we add the second term (referred to as the data term) to guide the training, using (approximate) ground-truth values of $w$ at a set of points $\{z_i\}_{i=1}^{N_d} \subset \mathbb{X}$.
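A minimal NumPy sketch of this loss follows. The tiny tanh network standing in for $W_N$, the use of the reversed Van der Pol map of Section XI as $f$, and the simplification $\gamma \equiv 1$ (so that $\xi(x) = 1 - \exp(-\alpha(x))$ with $\alpha(x) = \Delta_t\|x\|^2$) are all assumptions of this sketch, not LyZNet's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in for W_N(.; theta): one hidden tanh layer (illustrative only).
W1, b1 = 0.5 * rng.standard_normal((16, 2)), np.zeros(16)
W2, b2 = 0.5 * rng.standard_normal((1, 16)), np.zeros(1)

def W_N(x):                      # x: (batch, 2) -> (batch,)
    return (np.tanh(x @ W1.T + b1) @ W2.T + b2).ravel()

def f(x, dt=0.1):                # assumed dynamics; any discrete-time map works here
    x1, x2 = x[:, 0], x[:, 1]
    return np.stack([x1 - dt * x2, x2 + dt * (x1 + (x1**2 - 1.0) * x2)], axis=1)

def xi(x, dt=0.1):               # xi(x) = 1 - exp(-gamma(x)*alpha(x)); gamma = 1 here
    return 1.0 - np.exp(-dt * np.sum(x**2, axis=1))

def pinn_loss(xc, zd, w_hat, lam_d=1.0):
    fw = W_N(f(xc))
    residual = W_N(xc) - fw - xi(xc) * (1.0 - fw)   # residual of equation (5)
    data = W_N(zd) - w_hat                           # data term
    return np.mean(residual**2) + lam_d * np.mean(data**2)

xc = rng.uniform(-2.5, 2.5, size=(64, 2))    # collocation points in the training set
zd = rng.uniform(-2.5, 2.5, size=(8, 2))     # data points with labels w_hat
val = pinn_loss(xc, zd, w_hat=np.zeros(8))
```

In practice the parameters would be optimized by gradient descent on this scalar; the sketch only evaluates the objective once.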
For each $z_i$, given a fixed (sufficiently large) $M \in \mathbb{N}$, we compute $\mathcal{V}(z_i) \approx \sum_{k=0}^{M} \gamma(\varphi_{z_i}(k))\alpha(\varphi_{z_i}(k))$. Note that as long as $\mathcal{V}(z_i)$ is finite, $z_i$ lies within the safe region. Next, we choose a constant $C_{max}$ large enough that $C_{max} > \mathcal{V}(z_i)$ for all sampled $z_i$ whose trajectories do not leave the safe region and converge to the origin. Considering the fact that $1 - \exp(-40) \approx 1$, we introduce a scaling factor $\mu = \frac{40}{C_{max}}$ and define $\hat{\mathcal{V}} = \mu\mathcal{V}$, which corresponds to the value function given by (2) with $\alpha$ replaced by $\mu\alpha$. We then let $\hat{W}(z_i) = 1 - \exp(-\hat{\mathcal{V}}(z_i))$. Consequently, the approximate ground truth $\hat{W}$ takes values from 0 to 1, with $\hat{W}(z_i) < 1$ whenever $z_i$ is in the safe region. Additionally, we set $\hat{W}(z_i) = 1$ in the following two scenarios: 1) the trajectory $\varphi_{z_i}(k)$ enters the unsafe region, which results in $\mathcal{V}(z_i) > C_{max}$; 2) $\varphi_{z_i}(k)$ diverges, which can be checked numerically by examining the values of the states, that is, $\|\varphi_{z_i}(k)\| > C_X$, where $C_X$ is a user-defined threshold. We also include the condition $\hat{W}(0_n) = 0$ in the data loss.

# X. VERIFIED SAFE ROAS

# A. Safe ROAs with quadratic Lyapunov functions

Herein, we adopt the approach in [6] to find ellipsoidal safe ROAs using quadratic Lyapunov functions.
We start with a local safe ROA obtained by linearization, where we assume that $f$ is twice continuously differentiable. Let $A = Df(0)$ (assumed to be a Schur matrix) and rewrite $f$ as $f(x) = Ax + h(x)$, $x \in \mathbb{R}^n$, where $h(\cdot) = f(\cdot) - A(\cdot)$. Let $Q \in S_{++}^n$ be given, and let $P \in S_{++}^n$ be the solution to the discrete-time algebraic Lyapunov equation $A^\top P A - P = -Q$. Define the quadratic Lyapunov function $V_P(x) = x^\top P x$, and let $V_P^+(x) := V_P(f(x)) - V_P(x)$. Define the positive parameter $d := \underline{\lambda}(Q) - \varepsilon > 0$, for some sufficiently small $\varepsilon > 0$. Let $B \subseteq \mathcal{X}$ be a hyper-rectangle with vector radius $R_B \in \mathbb{R}_+^n \setminus \{0_n\}$, i.e., $B = [[-R_B, R_B]]$ (such a hyper-rectangle exists due to the openness of $\mathcal{X}$ and the fact that $0_n \in \mathcal{X}$). We can find a vector $\eta_B \in \mathbb{R}_+^n$ (by bounding the Hessian of $f$ over $B$, e.g., using interval arithmetic) such that

$$ |h(x)| \le \frac{\|x\|^2}{2} \eta_B, \quad x \in B. $$

Define $c_1 = \min\{a_1, a_2\}$, where

$$ a_1 := \frac{(-\beta + \sqrt{\beta^2 + 4\alpha d})^2}{(2\alpha)^2}, $$

with $\beta := \|P^{\frac{1}{2}} \eta_B\| \, \|P^{\frac{1}{2}} A P^{-\frac{1}{2}}\|$ and $\alpha > 0$ a constant derived from the bound on $h$ (see [28]), and

$$ a_2 := \min_{i \in [1;n]} \frac{R_{B,i}^2}{P_{i,i}^{-1}}. $$

Then, it can be shown [28]$^1$ that, for all $x \in \mathbb{R}^n$,

$$ V_P(x) < c_1 \Rightarrow x \in B \subseteq \mathcal{X} \wedge V_P^+(x) \le -\varepsilon\|x\|^2. $$

It then follows that:

Proposition 12: The set $\mathbf{V}_{c_1} := \{x \in \mathbb{R}^n : V_P(x) \le c_1\}$ is a safe ROA of (1).

Suppose that we have verified a safe ROA around the origin, $\mathbf{V}_{c_1}$, for some $c_1 > 0$. We can then enlarge the safe ROA with the quadratic Lyapunov function by verifying the following inequality, for $x \in \mathbb{R}^n$:

$$ c_1 \le V_P(x) \le c_2 \Longrightarrow (V_P^+(x) \le -\varepsilon) \wedge (g(x) < 1), $$

where $c_2 > c_1$ is a positive constant. Unless otherwise specified, $\varepsilon$ denotes some positive constant in the given context.

Proposition 13: Suppose that (12) holds. Then, the set $\mathbf{V}_{c_2} := \{x \in \mathbb{R}^n : V_P(x) \le c_2\}$ is a safe ROA of (1).

Proof: Define $\mathbf{V}_{c_2/c_1} := \{x \in \mathbb{R}^n : c_1 \le V_P(x) \le c_2\}$.
If (12) is satisfied, then every solution $\varphi_x(k)$ of (1) starting in $\mathbf{V}_{c_2/c_1}$ cannot leave $\mathbf{V}_{c_2/c_1}$ before entering $\mathbf{V}_{c_1}$ in finite time. By Proposition 12, if a solution starts in or enters $\mathbf{V}_{c_1}$, it remains in $\mathbf{V}_{c_1}$ and eventually converges to $0_n$. Moreover, we have $g(x) < 1$ on both $\mathbf{V}_{c_2/c_1}$ and $\mathbf{V}_{c_1}$. It follows that $\mathbf{V}_{c_2} = \mathbf{V}_{c_2/c_1} \cup \mathbf{V}_{c_1} \subseteq \mathcal{X}$. Therefore, $\mathbf{V}_{c_2}$ is a safe ROA of (1).

Remark 2: Assume $\mathbb{X} = [\underline{\mathbf{x}}, \overline{\mathbf{x}}]$, i.e., $\mathbb{X}$ is a hyper-rectangle. An upper bound on the values of $c$ such that the set $\{x \in \mathbb{R}^n : x^\top P x \le c\} \subseteq \mathbb{X}$ is needed, as condition (12) is verified in practice over $\mathbb{X}$ rather than over $\mathbb{R}^n$. A necessary and sufficient condition for the inclusion is then $c \le \min_{i \in [1;n]} (\min\{-\underline{\mathbf{x}}_i, \overline{\mathbf{x}}_i\})^2 / P_{i,i}^{-1}$.

# B. Enlarge the ROAs with the neural Lyapunov functions

The safe ROA estimate can be further enlarged if we can find a neural network Lyapunov function $W_N(x)$ by minimizing (11) and verifying the following inequalities:

$$ \begin{array}{rl} & (W_N(x) \le w_1) \wedge (x \in \mathbb{X}) \Longrightarrow V_P(x) \le c_2, \\ & (V_P(x) \le c_2) \wedge (x \in \mathbb{X}) \Longrightarrow W_N(x) \le w_2, \\ & (w_1 \le W_N(x) \le w_2) \wedge (x \in \mathbb{X}) \Longrightarrow \\ & \quad (W_N^+(x) \le -\varepsilon) \wedge (g(x) < 1) \wedge (f(x) \in \mathbb{X}), \end{array} $$

where $\varepsilon > 0$, $w_2 > w_1 > 0$, and $W_N^+(x) := W_N(f(x)) - W_N(x)$.

Proposition 14: Suppose that (13)-(15) and the conditions in Proposition 13 hold. Then the set $\mathbf{W}_{w_2} := \{x \in \mathbb{X} : W_N(x) \le w_2\}$ is a safe ROA of (1).

Proof: Define $\mathbf{W}_{w_1} := \{x \in \mathbb{X} : W_N(x) \le w_1\}$ and $\mathbf{W}_{w_2/w_1} := \{x \in \mathbb{X} : w_1 < W_N(x) \le w_2\}$, so that $\mathbf{W}_{w_2} = \mathbf{W}_{w_1} \cup \mathbf{W}_{w_2/w_1}$. By conditions (13) and (14), and Proposition 13, $\mathbf{W}_{w_1} \subseteq \mathbf{V}_{c_2} \subseteq \mathbf{W}_{w_2} \subseteq \mathcal{X}$.

$^1$This is an archived version of [6], which corrects errors in the bounds used in the derivations of the ellipsoidal ROAs.
Moreover, Proposition 13 implies that

$$ x \in \mathbf{W}_{w_1} \Rightarrow \varphi_x(k) \in \mathbf{V}_{c_2} \subseteq \mathbf{W}_{w_2}, \ k \in \mathbb{Z}_+, $$

and $\lim_{k\to\infty} \varphi_x(k) = 0_n$. Let $x \in \mathbf{W}_{w_2/w_1}$. Using condition (15), we have $f(x) \in \mathbb{X}$ and $W_N(f(x)) \le w_2 - \varepsilon \le w_2$, i.e., $f(x) \in \mathbf{W}_{w_2}$. This and equation (16) imply the invariance of $\mathbf{W}_{w_2}$. It remains to show that, for $x \in \mathbf{W}_{w_2/w_1}$, there exists $N \in \mathbb{Z}_+$ such that $\varphi_x(N) \in \mathbf{W}_{w_1}$. By contradiction, assume that $\varphi_x(k) \in \mathbf{W}_{w_2/w_1}$ for all $k \in \mathbb{Z}_+$; then $w_1 < W_N(\varphi_x(k)) \le w_2$ for all $k \in \mathbb{Z}_+$. However, by inductively applying condition (15), $W_N(\varphi_x(k)) \le w_2 - k\varepsilon$, $k \in \mathbb{Z}_+$, which implies that for an integer $k > (w_2 - w_1)/\varepsilon$, $W_N(\varphi_x(k)) < w_1$, a contradiction.

# XI. NUMERICAL EXAMPLES

In this section, we present a set of numerical examples to illustrate the effectiveness of the proposed method. The training of the neural network Lyapunov functions is carried out using LyZNet [29], a Python toolbox for learning and verifying Lyapunov functions for nonlinear systems. For this work, we extended LyZNet to handle state-constrained, discrete-time nonlinear systems.
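The examples below determine the largest verifiable level $c_2$ by bisection against a verifier. A generic sketch of that loop is shown here; the `verify` callback is a hypothetical stand-in for a dReal or $\alpha,\beta$-CROWN query and is assumed monotone in $c$:

```python
def largest_verified_level(verify, c_lo, c_hi, tol=1e-2):
    """Bisect for (approximately) the largest c in [c_lo, c_hi] with verify(c) True.

    `verify(c)` is a hypothetical oracle returning True iff condition (12) can be
    certified on {x : c_lo <= V_P(x) <= c}; c_lo is assumed verified, c_hi not.
    """
    best = c_lo
    while c_hi - c_lo > tol:
        c = 0.5 * (c_lo + c_hi)
        if verify(c):
            best, c_lo = c, c      # keep the largest certified level so far
        else:
            c_hi = c
    return best
```

With an oracle whose true threshold is 11.14 (the value reported for the first example), the loop returns a certified level within `tol` of that threshold.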
For the verification part, the quadratic Lyapunov function is verified with dReal [26] within the LyZNet framework, while the learned neural network Lyapunov function is verified with $\alpha,\beta$-CROWN [14], [15], [17], [23]-[25], a GPU-accelerated neural network verifier. The verification with dReal was run on a 2x Intel Xeon 8480+ 2.0 GHz CPU with 24 cores, while the GPU-based training and verification with $\alpha,\beta$-CROWN were performed on an NVIDIA Hopper H100 GPU. The code for the training and verification with dReal can be found at https://github.com/RuikunZhou/Safe_ROA_dt_LyZNet, while the verification with $\alpha,\beta$-CROWN is available at https://github.com/RuikunZhou/Safe-ROA-dt-system.

# A. Reversed Van der Pol Oscillator

Consider a discrete version of the reversed Van der Pol oscillator, obtained as the Euler discretization of the corresponding continuous-time system:

$$ f(x_k) = \begin{pmatrix} x_{1,k} - \Delta_t x_{2,k} \\ x_{2,k} + \Delta_t \left( x_{1,k} + (x_{1,k}^2 - 1) x_{2,k} \right) \end{pmatrix}, $$

where the step size $\Delta_t = 0.1$. We aim to estimate the safe DOA, $\mathcal{D}_0^{\mathcal{X}}$, where $\mathcal{X} = \mathbb{R}^2 \setminus ([1\ 1]^\top + (1/4)\mathbb{B}_2)$; in other words, there is an obstacle of radius $1/4$ centered at $[1\ 1]^\top$. The set on which training and verification take place is defined as $\mathbb{X} = [[-2.5\ {-3.5}]^\top, [2.5\ 3.5]^\top]$.
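For this example, the data labels $\hat{W}(z_i)$ of Section IX can be generated by rolling out the map. In the sketch below, the constants $M$, $C_{max}$, and $C_X$ are illustrative choices, and $\gamma$ is simplified to 1 on the safe set (the paper's $\gamma$ grows unbounded near the unsafe set):

```python
import numpy as np

def f(x, dt=0.1):                          # the Euler-discretized map above
    x1, x2 = x
    return np.array([x1 - dt * x2, x2 + dt * (x1 + (x1**2 - 1.0) * x2)])

def unsafe(x):                             # obstacle of radius 1/4 centered at [1, 1]
    return np.linalg.norm(x - np.array([1.0, 1.0])) <= 0.25

def w_hat(z, M=1000, C_max=40.0, C_X=1e3, dt=0.1):
    """Approximate label: 1 if the trajectory hits the obstacle, exceeds C_max,
    or diverges; otherwise 1 - exp(-mu * V(z)) with mu = 40 / C_max."""
    mu, V, x = 40.0 / C_max, 0.0, np.asarray(z, float)
    for _ in range(M):
        if unsafe(x) or np.linalg.norm(x) > C_X or V > C_max:
            return 1.0
        V += dt * np.dot(x, x)             # gamma = 1 (sketch), alpha = dt * ||x||^2
        x = f(x)
    return 1.0 - np.exp(-mu * V)

label_safe = w_hat(np.array([0.5, 0.5]))    # lies in the verified safe ROA
label_unsafe = w_hat(np.array([1.0, 1.0]))  # starts inside the obstacle
```

A point in the safe ROA receives a label strictly below 1, while a point starting in the obstacle is labeled 1.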
For the neural network training, we use a feedforward neural network with 2 hidden layers of 30 neurons each, and set $\alpha(x) = \Delta_t \|x\|^2$ and $g_{\mathcal{X}}(x_k) = 1 + 1/4 - ((x_{1,k} - 1)^2 + (x_{2,k} - 1)^2)/0.25$. The learned neural network Lyapunov function and its corresponding safe ROA are shown in Fig. 1. For the quadratic Lyapunov function, we choose $Q = I$, which results in $V_P = 16.3896 x_{1,k}^2 + 11.4027 x_{2,k}^2 - 11.1403 x_{1,k} x_{2,k}$, as demonstrated in Section X-A. It is worth mentioning that we are able to compute an upper bound on the level sets of the quadratic Lyapunov function within $\mathcal{X}$ using the method proposed in Remark 2, and then conduct bisection to determine $c_2$ using the verification tools. In this case, the upper bound is 85.43. Consequently, we compute $c_1 = 1.78$ as illustrated in Section X-A, and $c_2 = 11.14$ is determined using LyZNet. We then verify the conditions in Section X-B for the neural network Lyapunov function. The verification times of the two different verification tools for the neural network Lyapunov function are reported in Table I.

Fig. 1. Neural Lyapunov function and the corresponding safe ROA (blue curve on the right) for the reversed Van der Pol oscillator, where the red dashed line represents the safe ROA obtained by the quadratic Lyapunov function with $c_2 = 11.14$.

TABLE I: COMPARISONS OF THE VERIFICATION TIME FOR THE NEURAL NETWORK LYAPUNOV FUNCTION IN SECTION X-B

# B. Two-Machine Power System

In this subsection, we consider a discrete version of the two-dimensional two-machine power system studied in [30], [31]. As in the first example, the discrete version is obtained through Euler discretization of its continuous-time counterpart, which yields

$$ f(x_k) = \begin{pmatrix} x_{1,k} + \Delta_t x_{2,k} \\ x_{2,k} - \Delta_t \left( \frac{x_{2,k}}{2} + \sin(x_{1,k} + \frac{\pi}{3}) - \sin(\frac{\pi}{3}) \right) \end{pmatrix}, $$

where $\Delta_t = 0.1$. Here, the safe set is $\mathcal{X} = \mathbb{R}^2 \setminus \left( ([0.25\ 0.25]^\top + (1/8)\mathbb{B}_2) \cup ([0.25\ {-0.25}]^\top + (1/8)\mathbb{B}_2) \right)$. The set for training and verification is $\mathbb{X} = [-[1\ 0.5]^\top, [1\ 0.5]^\top]$. We use the same $\alpha(x)$ as in the first example and $g_{\mathcal{X}}(x_k) = \max\big( 1 + (1/8)^2 - ((x_{1,k} - 0.25)^2 + (x_{2,k} - 0.25)^2),\ 1 + (1/8)^2 - ((x_{1,k} - 0.25)^2 + (x_{2,k} + 0.25)^2) \big)$. With the same neural network structure and procedure as in the previous example, both the quadratic Lyapunov function and the neural network Lyapunov function can be obtained and verified. In this case, $V_P = 21.9377 x_{1,k}^2 + 33.6321 x_{2,k}^2 + 21.6816 x_{1,k} x_{2,k}$, $c_1 = 0.21$, and $c_2 = 0.86$. The safe ROAs and the learned neural network Lyapunov function are illustrated in Fig. 2, while the verification time is reported in Table I. It is clear that in both examples, the neural network Lyapunov functions obtained with the proposed method yield larger safe ROA estimates than the quadratic Lyapunov functions. Additionally, $\alpha,\beta$-CROWN outperforms dReal in efficiency, even though it needs to be reinitialized in each iteration during bisection, which takes most of the reported time.

Fig. 2. Neural Lyapunov function and the corresponding safe ROA (blue curve on the right) for the two-machine power system, where the red dashed line represents the safe ROA obtained by the quadratic Lyapunov function with $c_2 = 0.86$.

# C. 4-dimensional Power System

We consider a 4-dimensional two-generator bus power system from [32, Chapter 5]:

$$ f(x_k) = \begin{pmatrix} x_{1,k} + \Delta_t x_{2,k} \\ x_{2,k} + \Delta_t R_{2,k} \\ x_{3,k} + \Delta_t x_{4,k} \\ x_{4,k} + \Delta_t R_{4,k} \end{pmatrix}, $$

where $R_{2,k} = -\alpha_1 \sin(x_{1,k}) - \beta_1 \sin(x_{1,k} - x_{3,k}) - d_1 x_{2,k}$ and $R_{4,k} = -\alpha_2 \sin(x_{3,k}) - \beta_2 \sin(x_{3,k} - x_{1,k}) - d_2 x_{4,k}$. The parameters are defined as follows: $\alpha_1 = \alpha_2 = 1$, $\beta_1 = \beta_2 = 0.5$, $d_1 = 0.4$, $d_2 = 0.5$, and the time step $\Delta_t = 0.05$. In this case, $\mathbb{X} = [-3.5\ 3.5]^4$, i.e., in each dimension the state ranges from $-3.5$ to $3.5$. Similar to the previous two examples, we learn a neural network Lyapunov function using a network with 2 hidden layers of 50 neurons each, while the quadratic Lyapunov function is again obtained with $Q = I$. The expression of $V_P$ is omitted; the constants are computed as $c_1 = 1.34$ and $c_2 = 140.625$.
For this high-dimensional system, obstacles are not included, i.e., $\mathcal{X} = \mathbb{R}^4$, as we mainly aim to illustrate the efficacy of the proposed method with formal guarantees provided by the more efficient verifier, $\alpha,\beta$-CROWN. Due to the limited scalability of dReal, the learned neural network Lyapunov function could not be verified with LyZNet within a 7-day timeout. The verified result using $\alpha,\beta$-CROWN is shown in Fig. 3.
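As a lightweight illustration of the two-machine example above, the discretized dynamics and the reported quadratic candidate $V_P$ can be spot-checked by sampling. The Monte Carlo decrease check below is our own sketch and carries no formal guarantee; it is precisely this gap that verifiers such as $\alpha,\beta$-CROWN or dReal close.

```python
import numpy as np

DT = 0.1  # Euler discretization step used for the two-machine example

def f(x):
    """Discrete-time two-machine power system dynamics (Euler-discretized)."""
    x1, x2 = x
    return np.array([
        x1 + DT * x2,
        x2 - DT * (x2 / 2.0 + np.sin(x1 + np.pi / 3) - np.sin(np.pi / 3)),
    ])

def V_P(x):
    """Quadratic Lyapunov candidate reported for this example."""
    x1, x2 = x
    return 21.9377 * x1**2 + 33.6321 * x2**2 + 21.6816 * x1 * x2

def decrease_violations(c2=0.86, n=20000, seed=0):
    """Count sampled states in {0 < V_P <= c2} where V_P fails to decrease.

    A Monte Carlo spot-check only: passing it is necessary but not
    sufficient, which is why the paper relies on formal verifiers.
    """
    rng = np.random.default_rng(seed)
    xs = rng.uniform([-1.0, -0.5], [1.0, 0.5], size=(n, 2))
    bad = 0
    for x in xs:
        v = V_P(x)
        if 1e-6 < v <= c2 and V_P(f(x)) >= v:
            bad += 1
    return bad
```

Note that the origin is an equilibrium of $f$ by construction, so $V_P$ is only checked away from it.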
Analysis of nonlinear autonomous systems typically involves estimating domains of attraction, which has been a topic of extensive research interest for decades. Despite that, accurately estimating domains of attraction for nonlinear systems remains challenging, as existing methods are either conservative or limited to low-dimensional systems. The estimation becomes even more challenging when accounting for state constraints. In this work, we propose a framework to accurately estimate safe (state-constrained) domains of attraction for discrete-time autonomous nonlinear systems. In establishing this framework, we first derive a new Zubov equation whose solution corresponds to the exact safe domain of attraction. The solution to this Zubov equation is shown to be unique and continuous over the whole state space. We then present a physics-informed approach to approximating the solution of the Zubov equation using neural networks. To obtain certifiable estimates of the domain of attraction from the neural network approximate solutions, we propose a verification framework that can be implemented using standard verification tools (e.g., $\alpha,\beta$-CROWN and dReal). We demonstrate the effectiveness of our approach through numerical examples on nonlinear systems with state constraints.
# 1 Introduction

Graph Neural Networks (GNNs) have emerged as a robust machine learning paradigm for learning expressive representations of graph-structured data through message passing, exhibiting remarkable performance across various AI applications, such as modeling molecular interactions [Huang et al., 2020]. However, most existing GNNs adopt a centralized training strategy in which graph data must be collected together before training. In practical industrial scenarios, large-scale graphs are collected and stored on edge devices. Meanwhile, regulations such as GDPR [Voigt and Von dem Bussche, 2017] highlight the importance of data privacy and impose restrictions on the transmission of local data, which often contains sensitive information. This has led to the exploration of leveraging collective intelligence through distributed data silos to enable collaboration in graph learning [Li et al., 2022a].

Figure 1: (a) The conventional subgraph-FL framework in the subgraph heterogeneity scenario, where node colors represent different labels. (b) The condensation-based subgraph-FL framework, which trains a robust global model by integrating condensed knowledge.

To this end, Federated Graph Learning (FGL) has been proposed, extending Federated Learning (FL) to graph-structured data. Its core idea is to harness collective intelligence for the collaborative training of powerful GNNs, thereby advancing AI-driven insights in federated systems. Given the diversity of graph-based downstream tasks, this paper focuses on subgraph-FL, the instance of FGL for the semi-supervised node classification paradigm. To enhance understanding, we present a case study within healthcare systems. Case Study. In different regions, residents visit various hospitals (e.g., independent clients), all of which are centrally managed by government organizations (e.g., the trusted server). Each hospital maintains a subgraph in its database, containing demographics, living conditions, and patient interactions.
These subgraphs form a global patient network. Notably, due to privacy regulations, geographic isolation, and competitive concerns, centralized data storage is not feasible. Fortunately, subgraph-FL enables federated training through multi-client collaboration without direct data sharing. For tasks such as predicting the spread of infections during a pandemic [Bertozzi et al., 2020], developing a federated collaborative paradigm for such distributed scenarios is essential. Specifically, as illustrated in Fig.1(a), the iterative training process of conventional subgraph-FL consists of four steps: (1) all clients download the latest global model from the server; (2) each client trains the model on its privately stored subgraph; (3) after local training, the clients upload the model parameters or gradients to the central server; (4) the server aggregates the model parameters or gradients to update the global model, which is then broadcast back to the clients. Based on this, most subgraph-FL approaches [Fu et al., 2022] rely on model parameters or gradients as the optimization carriers for federated training. These carriers are primarily derived from the Computer Vision (CV)-based FL collaborative paradigm [Lim et al., 2020], which struggles to capture client-specific local convergence tendencies due to the complex topology in FGL, and thus fails to adequately address the unique challenges of subgraph heterogeneity [Baek et al., 2023]. Notably, current FGL methods [Li et al., 2024b; Li et al., 2024a] typically face a trade-off between optimization and privacy, as they seek to enhance model gradient-based federated convergence by sharing more messages. Further analysis and empirical studies are provided in Sec. 3. Despite the significant contributions of existing methods, the inherent trade-off dilemma in CV-based federated optimization carriers limits the upper bound of FGL convergence, motivating us to propose a new collaborative paradigm.
Inspired by Graph Condensation (GC) [Zhao et al., 2020; Jin et al., 2021], a comprehensive and explicit approach based on condensed graphs holds promise for addressing the aforementioned limitations. Specifically, condensed graphs can effectively capture the complex relationships between nodes and topology, providing a more suitable means of information transmission. This implies that we can use the condensed graph as the optimization carrier for FGL, replacing traditional model parameters or gradients. Furthermore, a recent study [Dong et al., 2022] suggests that data condensation via gradient matching can safeguard privacy. This makes condensed-graph-based approaches highly promising as a robust framework for tackling FGL data heterogeneity. The core idea of our method is to integrate generalized condensed subgraph consensus to acquire comprehensive and reliable knowledge. In this paper, we propose a new FGL paradigm based on condensed subgraphs, as illustrated in Fig.1(b): (1) each client performs local subgraph condensation using gradient matching and then uploads the condensed subgraph to the server; (2) the central server optimizes the condensed knowledge from a global perspective, resulting in a global-level condensed graph; (3) the server then trains a robust global model on the condensed graph and returns it to the clients. Based on our proposed FGL paradigm, we introduce FedGM, a specialized dual-stage framework, as follows. Stage 1: Each client independently performs local subgraph condensation through gradient matching between the real subgraph and the condensed subgraph, without any communication, and then uploads the condensed knowledge to the server. Notably, multiple clients execute data condensation locally and in parallel. The central server then integrates these condensed subgraphs into a global-level condensed graph, which completes Stage 1.
Clients and the server need only a single communication round to upload the locally condensed subgraphs, making this a one-shot FGL process. Stage 2: To achieve performance comparable to directly training on the implicit global real graph, FedGM employs federated gradient matching to optimize the condensed features. This approach leverages global class-wise knowledge to reinforce and consolidate the condensation consensus through multiple rounds of federated optimization. Subsequently, the central server trains a robust global GNN using the global condensed graph and distributes it to all clients. Our contributions are as follows: (1) New Framework. To the best of our knowledge, we are the first to introduce condensed graphs as a novel optimization carrier to address the challenge of subgraph heterogeneity. (2) New Paradigm. We propose FedGM, a dual-stage paradigm that integrates generalized condensed subgraph consensus to obtain comprehensive knowledge while minimizing communication costs and reducing the risk of privacy breaches through a single transmission of condensed data between clients and the server. (3) SOTA Performance. Extensive experiments on six datasets demonstrate the consistent superiority of FedGM over state-of-the-art baselines, with improvements of up to $4.3\%$.

# 2 Notations and Problem Formalization

# 2.1 Notations

Graph Neural Networks. Consider a graph $G = \{\mathbf{A}, \mathbf{X}, \mathbf{Y}\}$ consisting of $N$ nodes, where $\mathbf{X} \in \mathbb{R}^{N \times d}$ is the $d$-dimensional node feature matrix and $\mathbf{Y} \in \{1, ..., C\}^{N}$ denotes the node labels over $C$ classes. $\mathbf{A} \in \mathbb{R}^{N \times N}$ is the adjacency matrix, with entry $\mathbf{A}_{i,j} > 0$ denoting an observed edge from node $i$ to $j$, and $\mathbf{A}_{i,j} = 0$ otherwise.
Building upon this, most GNNs can be subsumed into the deep message-passing framework [Wu et al., 2020a]. We use the graph convolutional network (GCN) [Zhang et al., 2019] as an example, where the propagation process in the $\ell$-th layer is as follows:
$$
\mathbf{H}^{(\ell)} = \mathrm{ReLU}\left( \hat{\mathbf{A}} \mathbf{H}^{(\ell-1)} \mathbf{W}^{(\ell)} \right),
$$
where $\hat{\mathbf{A}} = \tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-\frac{1}{2}}$ is the normalized adjacency matrix, $\tilde{\mathbf{A}}$ is the adjacency matrix with self-loops, $\tilde{\mathbf{D}}$ is the corresponding degree matrix, and $\mathbf{W}^{(\ell)}$ is the trainable weight matrix at layer $\ell$. $\mathbf{H}^{(\ell)}$ is the output node embedding matrix of the $\ell$-th layer. Graph Condensation. Graph condensation is proposed to learn a synthetic graph with $N' \ll N$ nodes from the real graph $G$, denoted by $\mathcal{S} = \{\mathbf{A}', \mathbf{X}', \mathbf{Y}'\}$ with $\mathbf{A}' \in \mathbb{R}^{N' \times N'}$, $\mathbf{X}' \in \mathbb{R}^{N' \times d}$, $\mathbf{Y}' \in \{1, ..., C\}^{N'}$, such that a GNN $f(\cdot)$ trained solely on $\mathcal{S}$ can achieve performance comparable to one trained on the original graph.
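As an illustration of the propagation rule above, the following is a minimal NumPy sketch of a single GCN layer (an illustrative toy, not the implementation used in the paper):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step:
    H^(l) = ReLU(A_hat @ H^(l-1) @ W^(l)),
    where A_hat = D_tilde^{-1/2} (A + I) D_tilde^{-1/2}."""
    A_tilde = A + np.eye(A.shape[0])           # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_hat @ H @ W, 0.0)      # ReLU

# Toy 3-node triangle graph with one-hot node features
A = np.ones((3, 3)) - np.eye(3)
H0 = np.eye(3)
W1 = np.ones((3, 2))
H1 = gcn_layer(A, H0, W1)  # shape (3, 2)
```

Stacking such layers (with trainable $\mathbf{W}^{(\ell)}$) gives the message-passing backbone assumed throughout the paper.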
In other words, graph condensation can be considered a process of minimizing the loss defined on the models trained on the real graph $G$ and the synthetic graph $\mathcal{S}$:
$$
\mathcal{S} = \underset{\mathcal{S}}{\arg\min}\, \mathcal{L}(\mathrm{GNN}_{\theta_{\mathcal{S}}}(G), \mathrm{GNN}_{\theta_{G}}(G)),
$$
where $\mathrm{GNN}_{\theta_{\mathcal{S}}}$ and $\mathrm{GNN}_{\theta_{G}}$ denote the GNN models trained on $\mathcal{S}$ and $G$, respectively, and $\mathcal{L}$ represents the loss function used to measure the difference between these two models. Subgraph Federated Learning. In subgraph-FL, the $k$-th client has a subgraph $G_k = \{\mathbf{A}_k, \mathbf{X}_k, \mathbf{Y}_k\}$ of an implicit global graph $G_{glo} = \{\mathbf{A}_{glo}, \mathbf{X}_{glo}, \mathbf{Y}_{glo}\}$ (i.e., $\mathbf{A}_k \subseteq \mathbf{A}_{glo}$, $\mathbf{X}_k \subseteq \mathbf{X}_{glo}$, $\mathbf{Y}_k \subseteq \mathbf{Y}_{glo}$). Each subgraph consists of $N_k$ nodes, where $\mathbf{X}_k \in \mathbb{R}^{N_k \times d}$ is the $d$-dimensional node feature matrix and $\mathbf{Y}_k \in \{1, ..., C\}^{N_k}$ denotes the node labels over $C$ classes. Typically, the training process for the $t$-th communication round in subgraph-FL with FedAvg aggregation can be described as follows: (i) Initialization: this step occurs only at the first communication round ($t = 1$). The server sets the local GNN parameters of the $K$ clients to the global GNN parameters $\bar{\theta}$, i.e., $\theta_k \gets \bar{\theta}\ \forall k$.
(ii) Local Updates: each local GNN is trained on the local data $G_k$ to minimize the task loss $\mathcal{L}(G_k; \theta_k)$, updating the parameters as $\theta_k \gets \theta_k - \eta \nabla \mathcal{L}$. (iii) Global Aggregation: after local training, the server aggregates local knowledge weighted by the number of training instances, i.e., $\bar{\theta} \gets \sum_{k=1}^{K} \frac{N_k}{N} \theta_k$ with $N = \sum_k N_k$, and distributes the updated global parameters $\bar{\theta}$ to the clients selected for the next round.

# 2.2 Problem Formalization

The proposed condensation-based subgraph-FL framework is as follows. First, each client performs local subgraph condensation and then uploads condensed knowledge to the server. Specifically, suppose that a client $k$ is tasked with learning a local condensed subgraph with $N' < N$ nodes from the real subgraph $G_k$, denoted as $\mathcal{S}_k = \{\mathbf{A}_k', \mathbf{X}_k', \mathbf{Y}_k'\}$ with $\mathbf{A}_k' \in \mathbb{R}^{N' \times N'}$, $\mathbf{X}_k' \in \mathbb{R}^{N' \times d}$, $\mathbf{Y}_k' \in \{1, ..., C\}^{N'}$. The central server leverages the global perspective to optimize the condensed subgraphs, resulting in a global-level condensed graph $\mathcal{S}_{glo} = \{\mathbf{A}_{glo}', \mathbf{X}_{glo}', \mathbf{Y}_{glo}'\}$. Subsequently, the server trains a robust global model on the condensed graph and returns it to the clients.
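The sample-weighted aggregation in FedAvg step (iii) reduces to a few lines; this is a minimal NumPy illustration under the assumption that each client's parameters are a plain list of arrays, not the authors' implementation:

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Sample-weighted FedAvg aggregation:
    theta_bar = sum_k (N_k / N) * theta_k.

    client_params: list over clients, each a list of numpy parameter arrays.
    client_sizes:  number of training instances N_k per client.
    """
    N = float(sum(client_sizes))
    agg = [np.zeros_like(p) for p in client_params[0]]
    for params, n_k in zip(client_params, client_sizes):
        for a, p in zip(agg, params):
            a += (n_k / N) * p
    return agg

# Two toy clients holding a single scalar parameter each:
# the client with 3 samples gets weight 3/4, the other 1/4.
theta_bar = fedavg([[np.array([1.0])], [np.array([3.0])]], [1, 3])
```

It is exactly this carrier (averaged parameters or gradients) that the condensation-based paradigm replaces with condensed subgraphs.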
The motivation for this design is to obtain a powerful model trained on the condensed graph $\mathcal{S}_{glo}$, achieving performance comparable to one directly trained on the implicit global graph $G_{glo}$:
$$
\begin{array}{rl}
& \underset{\mathcal{S}}{\min}\, \mathcal{L}(\mathrm{GNN}_{\theta_{\mathcal{S}}}(\mathbf{A}_{glo}, \mathbf{X}_{glo}), \mathbf{Y}_{glo}) \\
& \mathrm{s.t.}\ \theta_{\mathcal{S}} = \underset{\theta}{\arg\min}\, \mathcal{L}(\mathrm{GNN}_{\theta}(\mathbf{A}_{glo}', \mathbf{X}_{glo}'), \mathbf{Y}_{glo}'),
\end{array}
$$
where $\mathrm{GNN}_{\theta}$ denotes the GNN model parameterized by $\theta$, $\theta_{\mathcal{S}}$ denotes the parameters of the model trained on $\mathcal{S}_{glo}$, and $\mathcal{L}$ is the loss function used to measure the difference between model predictions and the ground truth (i.e., the cross-entropy loss).

# 3 Empirical Analysis

In this section, we empirically explore subgraph heterogeneity to investigate the FGL optimization dilemma. According to our observations, existing methods have the following limitations: (i) using model parameters or gradients as the primary information carriers overlooks the complex interplay between features and topology within local heterogeneous subgraphs, resulting in sub-optimal performance; (ii) many existing methods require the upload of additional information (e.g., subgraph embeddings or mixed moments), raising privacy concerns. An in-depth analysis is provided as follows. In CV-based FL, data heterogeneity refers to variations among clients in terms of features, labels, data quality, and data quantity [Qu et al., 2022; Li et al., 2022b]. This variability presents substantial challenges for effective federated training and optimization.
In this work, we focus on the heterogeneity of features and labels, considering their strong correlation, which is widely highlighted in practical applications. Unlike data heterogeneity in conventional FL, subgraph heterogeneity is influenced by diverse topologies across clients [Li et al., 2024a]. According to the homophily assumption [Wu et al., 2020b], connected nodes tend to share similar feature distributions and labels. However, with the increasing deployment of GNNs in real-world applications, topology heterophily has emerged [Zhu et al., 2021; Luan et al., 2022], where connected nodes exhibit contrasting attributes. Due to community-based subgraph data collection methods (i.e., a community can be viewed as a client), subgraphs from different communities often exhibit diverse topological structures, leading to non-independent and identically distributed (non-IID) labels, as shown in Fig.2(a). Specifically, we observe strong homophily at Client 1 in Cora and CiteSeer, as the majority of nodes belong to the same label class. In contrast, at Cora-Client 3 and CiteSeer-Client 10, we observe the presence of heterophily, as label distributions approach uniformity. In summary, clients often exhibit diverse label distributions and topologies, a characteristic of distributed graphs that demands attention. Conventional federated training, which neglects subgraph heterogeneity, leads to underperformance.

Figure 2: (a) Label distribution based on a random data split, where the color gradient from white to blue indicates the increasing number of nodes held by different clients in each class. (b) Optimization performance of various methods under subgraph heterogeneity scenarios. The x-axis of the line plot represents federated training rounds, with "Local" indicating model performance in siloed settings.

To address these issues, FED-PUB [Baek et al., 2023] measures subgraph similarity by transmitting subgraph embeddings for personalized aggregation.
FedGTA [Li et al., 2024b] shares mixed moments and local smoothing confidence for topology-aware aggregation. FedSage+ [Zhang et al., 2021] and FedGNN [Wu et al., 2021] aim to reconstruct potentially missing edges among clients, thereby aligning local objectives at the data level. FedTAD [Zhu et al., 2024] introduces topology-aware, data-free knowledge distillation. Despite these considerable efforts on subgraph heterogeneity, such methods rely on uploading additional information, leading to privacy concerns and higher communication overhead.

Figure 3: Overview of the proposed dual-stage FedGM framework.

Considering the capability of condensed graphs to capture complex node-to-topology relationships while preserving privacy, we propose the condensation-based subgraph-FL framework. To validate the effectiveness of condensed knowledge as an optimization carrier, we conduct experiments using GCN in a federated learning setting with 10 clients across two common datasets, Cora [Kipf and Welling, 2016a] and CiteSeer [Kipf and Welling, 2016a], as shown in Fig.2(b). The results demonstrate improved model performance and faster convergence of the condensation-based subgraph-FL method. Superior performance indicates a better capability to tackle subgraph heterogeneity, while faster convergence translates into reduced communication costs.

# 4 Method

The overview of our proposed FedGM is depicted in Fig.3.
In Stage 1, each client runs a local process of standard graph condensation via one-step gradient matching and uploads its condensed subgraph to the central server, which subsequently integrates these into a global-level graph. In Stage 2, we introduce federated optimization and perform multiple rounds of communication to leverage class-wise knowledge, enhancing the quality of the condensed features.

# 4.1 Stage 1: Condensed Graph Generation

In the first stage, our task is to integrate the condensation consensus from clients to generate a global condensed graph. Since the real graph is distributed among multiple participants, direct access to the real graph to obtain the condensed graph, as in Eq.(3), is prohibited. Therefore, we perform subgraph condensation on each client to achieve local optimization and then integrate these subgraphs on the server via a single round of federated communication. Considering privacy protection requirements and condensation quality, we adopt the gradient alignment advocated by [Jin et al., 2021; Jin et al., 2022] as the local learning task. Unlike GraphGAN [Wang et al., 2018] and GraphVAE [Simonovsky and Komodakis, 2018], which synthesize high-fidelity graphs by capturing the data distribution, its goal is to generate informative graphs for training GNNs rather than "real-looking" graphs. Client-Side Subgraph Condensation. FedGM aims to provide a flexible graph learning paradigm by enabling each client to perform local subgraph condensation under local conditions without requiring real-time synchronization. Each client generates its condensed subgraph through one-step gradient matching [Jin et al., 2021; Jin et al., 2022], where a GNN is updated using the real subgraph and the condensed subgraph, respectively, and their resulting gradients are encouraged to be consistent, as shown on the left side of Fig.4.
The local optimization objective can be formulated as:
$$
\min_{\mathcal{S}_k} E_{\theta_k \sim P_{\theta_k}} \left[ D\left( \nabla_{\theta_k} \mathcal{L}_1, \nabla_{\theta_k} \mathcal{L}_2 \right) \right],
$$
$$
\mathcal{L}_1 = \mathcal{L}(\mathrm{GNN}_{\theta_k}(\mathbf{A}_k', \mathbf{X}_k'), \mathbf{Y}_k'),
$$
$$
\mathcal{L}_2 = \mathcal{L}(\mathrm{GNN}_{\theta_k}(\mathbf{A}_k, \mathbf{X}_k), \mathbf{Y}_k),
$$
where $D(\cdot, \cdot)$ is a distance function, and the subgraph condensation model parameters $\theta_k$ for client $k$ are initialized from the random initialization distribution $P_{\theta_k}$. In each condensation round, each client initializes its subgraph condensation model to calculate the gradients for the real subgraph and the condensed subgraph. By taking different parameter initializations drawn from the distribution $P_{\theta_k}$, the learned $\mathcal{S}_k$ avoids overfitting to a specific initialization. To facilitate efficient learning of $\mathcal{S}_k$, a common practice is to reduce the trainable components in the condensed subgraph $\mathcal{S}_k = \{\mathbf{A}_k', \mathbf{X}_k', \mathbf{Y}_k'\}$ to only the node features $\mathbf{X}_k'$.
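The objective above leaves the distance $D(\cdot,\cdot)$ generic. A common instantiation in gradient-matching condensation is a per-tensor (1 − cosine similarity) sum; the sketch below uses that choice as our assumption, since the paper does not pin $D$ down here:

```python
import numpy as np

def gm_distance(grads_real, grads_cond, eps=1e-8):
    """Gradient-matching distance D: sum over parameter tensors of
    (1 - cosine similarity) between the gradients produced by the
    real subgraph and by the condensed subgraph."""
    total = 0.0
    for g_r, g_c in zip(grads_real, grads_cond):
        g_r, g_c = g_r.ravel(), g_c.ravel()
        denom = np.linalg.norm(g_r) * np.linalg.norm(g_c) + eps
        total += 1.0 - (g_r @ g_c) / denom
    return total

g = np.array([[1.0, 2.0], [3.0, 4.0]])
aligned = gm_distance([g], [g])      # near 0: gradients already match
opposed = gm_distance([g], [-g])     # near 2: maximally misaligned
```

Minimizing this quantity with respect to $\mathbf{X}_k'$ (with $\theta_k$ resampled each round) is what drives the condensed subgraph toward the real one's training signal.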
Concretely, the labels $\mathbf{Y}_k'$ can be predefined to match the class distribution of the real subgraph, while each client condenses the graph structure by parameterizing the adjacency matrix $\mathbf{A}_k'$ with a function, so as not to overlook the implicit correlations between the condensed node features and the condensed structure [Jin et al., 2021]:

Figure 4: Representation of the local and global gradient matching in the model parameter space. Gradient matching iteratively optimizes the condensed data by minimizing the distance between the gradients generated by the real and condensed data on the model, ultimately aligning the low-loss region of the condensed data with the low-loss region of the real data. The blue intersecting region in the right panel represents shared intra-class knowledge.

$$
\mathbf{A}_{ij}' = \sigma\left( \left[ \mathrm{MLP}_{\Phi}([\mathbf{x}_i'; \mathbf{x}_j']) + \mathrm{MLP}_{\Phi}([\mathbf{x}_j'; \mathbf{x}_i']) \right] / 2 \right),
$$
where $\mathrm{MLP}_{\Phi}$ is a multi-layer perceptron (MLP) parameterized by $\Phi$, $[\cdot;\cdot]$ indicates concatenation, and $\sigma$ is the sigmoid function. The client condenses the subgraph by alternately optimizing $\mathbf{X}_k'$ and $\Phi_k$. After condensation, the client transfers $\mathcal{S}_k$ to the server via one-shot federated communication. Server-Side Condensed Subgraphs Integration.
To construct a global condensed graph, the server concatenates the features and labels from each client's condensed subgraph:
$$
\mathbf{X}_{\mathrm{glo}}' = \begin{bmatrix} \mathbf{X}_1' \\ \mathbf{X}_2' \\ \vdots \\ \mathbf{X}_K' \end{bmatrix}, \quad \mathbf{Y}_{\mathrm{glo}}' = \begin{bmatrix} \mathbf{Y}_1' \\ \mathbf{Y}_2' \\ \vdots \\ \mathbf{Y}_K' \end{bmatrix},
$$
where $\mathbf{X}_k'$ and $\mathbf{Y}_k'$ represent the condensed features and labels from client $k$, respectively. Unlike real-world graph structures, the condensed topology lacks tangible meaning; it is only relevant to the passage of condensed knowledge within the GNNs. To avoid disrupting the knowledge representing each client's subgraph within the condensed data, we retain the topology of each condensed subgraph. Specifically, the global condensed adjacency matrix $\mathbf{A}_{\mathrm{glo}}'$ is given by:
$$
\mathbf{A}_{\mathrm{glo}}'[i,j] = \begin{cases} \mathbf{A}_k'[i,j], & \text{if } i, j \in \mathcal{V}_k; \\ 0, & \text{otherwise}, \end{cases}
$$
where $\mathcal{V}_k$ denotes the set of nodes from client $k$. Consequently, we obtain an initial global condensed graph consisting of multiple connected components. However, there remains a significant gap between the quality of the condensed graph and our desired target due to the limitations of the narrow local scope. Therefore, it is crucial to effectively optimize the condensed graph by leveraging a global perspective.
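The stacking and block-diagonal placement just described amount to a few lines of array manipulation; a minimal NumPy sketch with toy inputs of our own:

```python
import numpy as np

def integrate_condensed(subgraphs):
    """Server-side integration of condensed subgraphs (Stage 1):
    stack features/labels and keep each client's condensed adjacency
    as a diagonal block, so cross-client entries stay zero."""
    As = [A for (A, X, Y) in subgraphs]
    X_glo = np.vstack([X for (A, X, Y) in subgraphs])
    Y_glo = np.concatenate([Y for (A, X, Y) in subgraphs])
    n = sum(A.shape[0] for A in As)
    A_glo = np.zeros((n, n))
    off = 0
    for A in As:
        m = A.shape[0]
        A_glo[off:off + m, off:off + m] = A  # client block on the diagonal
        off += m
    return A_glo, X_glo, Y_glo

# Two toy clients: 2 and 3 condensed nodes with 4-dim features
s1 = (np.ones((2, 2)), np.zeros((2, 4)), np.array([0, 1]))
s2 = (np.eye(3), np.ones((3, 4)), np.array([1, 2, 0]))
A_glo, X_glo, Y_glo = integrate_condensed([s1, s2])
```

The zero off-diagonal blocks are exactly the "otherwise" branch of the adjacency definition above, yielding one connected component per client.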
# 4.2 Stage 2: Condensed Graph Optimization

Class imbalance at the client level is a common phenomenon in scenarios with subgraph heterogeneity, which leads to poorer feature quality for condensed nodes, especially for the minority classes. We observe that majority classes on one client often correspond to minority classes on other clients in such scenarios, as shown in Fig. 1(b). Therefore, we aim to collect the class-wise gradients generated by each subgraph and perform federated gradient matching to optimize the condensed features. Our intuition is that the shared intra-class knowledge between clients provides a basis from the global perspective, which iteratively reduces the gap between the condensed graph and the real graph via class-wise gradient matching, as illustrated on the right side of Fig. 4. In the second stage, FedGM introduces condensed graph optimization, which is performed over multi-round federated communication. In the $t$-th iteration, the server samples the gradient generation model parameters $\theta_t$ from the random initialization distribution $P_{\theta_t}$ and sends them to each client. Client-Side Gradient Generation. On each client, the real subgraph generates class-wise gradients through the gradient generation model. For class $c$, the generated gradient is:
$$
\nabla_{\theta_t} \mathcal{L}^{G_{k,c}} = \nabla_{\theta_t} \mathcal{L}(\mathrm{GNN}_{\theta_t}(\mathbf{A}_{k,c}, \mathbf{X}_{k,c}), \mathbf{Y}_c),
$$
where $\mathbf{A}_{k,c}$ denotes the adjacency matrix composed of the $c$-class nodes and their neighbors, and $\mathbf{X}_{k,c}$ denotes the features corresponding to the $c$-class nodes.
To ease the presentation, we adopt the following notations for every client $k$:
$$
\nabla_{\theta_t} \mathcal{L}^{G_k} = \left[ \nabla_{\theta_t} \mathcal{L}^{G_{k,1}}, \nabla_{\theta_t} \mathcal{L}^{G_{k,2}}, ..., \nabla_{\theta_t} \mathcal{L}^{G_{k,C}} \right].
$$
Having obtained the class-wise gradients, clients share the generated gradients $\nabla_{\theta_t} \mathcal{L}^{G_k}$ with the server. Server-Side Gradient Matching. For class $c$, the server calculates the gradient generated by the implicit global graph based on the number of $c$-class condensed nodes from the clients:
$$
\nabla_{\theta_t} \mathcal{L}^{G_c} = \sum_{k=1}^{K} \frac{N_{k,c}'}{N_c'} \nabla_{\theta_t} \mathcal{L}^{G_{k,c}}.
$$
The gradient generated by the condensed graph is as follows:
$$
\nabla_{\theta_t} \mathcal{L}^{S_{glo,c}} = \nabla_{\theta_t} \mathcal{L}(\mathrm{GNN}_{\theta_t}(\mathbf{A}_{glo}', \mathbf{X}_{glo}'), \mathbf{Y}_c').
$$
Then we empower the server to match the gradients of the implicit global graph and the condensed graph for each class using the distance function.
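The server-side weighted aggregation just described (weights $N_{k,c}'/N_c'$) reduces to a short weighted sum; a minimal NumPy sketch, with the function name and toy inputs being ours:

```python
import numpy as np

def aggregate_class_gradient(client_grads, client_counts):
    """Server-side surrogate for the implicit global graph's class-c
    gradient: a weighted sum of client gradients with weights
    N'_{k,c} / N'_c, mirroring the aggregation rule above."""
    N_c = float(sum(client_counts))
    out = np.zeros_like(client_grads[0])
    for g, n_k in zip(client_grads, client_counts):
        out += (n_k / N_c) * g
    return out

# Two clients report class-c gradients, holding 1 and 3 c-class nodes:
# weights are 1/4 and 3/4 respectively.
g_c = aggregate_class_gradient(
    [np.array([2.0, 0.0]), np.array([6.0, 4.0])], [1, 3]
)
```

In the actual protocol this is applied per class and per parameter tensor, and the resulting surrogate gradient is matched against the condensed graph's gradient.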
We simplify our objective as
$$
\min_{\mathbf{X}_{\mathrm{glo}}'} E_{\theta_t \sim P_{\theta_t}} \left[ D\left( \nabla_{\theta_t} \mathcal{L}^{G}, \nabla_{\theta_t} \mathcal{L}^{S_{\mathrm{glo}}} \right) \right],
$$
where $\nabla_{\theta_t} \mathcal{L}^{G}$ and $\nabla_{\theta_t} \mathcal{L}^{S_{\mathrm{glo}}}$ are defined as
$$
\begin{array}{rl}
& \nabla_{\theta_t} \mathcal{L}^{G} = \left[ \nabla_{\theta_t} \mathcal{L}^{G_1}, \nabla_{\theta_t} \mathcal{L}^{G_2}, ..., \nabla_{\theta_t} \mathcal{L}^{G_C} \right], \\
& \nabla_{\theta_t} \mathcal{L}^{S_{\mathrm{glo}}} = \left[ \nabla_{\theta_t} \mathcal{L}^{S_{glo,1}}, \nabla_{\theta_t} \mathcal{L}^{S_{glo,2}}, ..., \nabla_{\theta_t} \mathcal{L}^{S_{glo,C}} \right].
\end{array}
$$
In the multi-round communication process, the central server optimizes the features using the class-specific gradients, ultimately obtaining the desired condensed graph. The second-stage method of FedGM is presented in Algorithm 1. After federated optimization, the server trains the global model on the condensed graph and sends the final model back to the clients.

# Algorithm 1 FedGM: Condensed Graph Optimization

Input: Rounds $T$; local real subgraphs $\{G_k\}_{k=1}^K$; initial condensed graph $\mathcal{S}_{glo}$.
Output: Optimized condensed graph $\mathcal{S}_{glo}'$.

/\* Client Execution \*/
1: for each communication round $t = 1, ..., T$ do
2:   Update the gradient generation model $\theta_t$;
3:   for each class $c = 1, ..., C$ do
4:     Sample from the real subgraph according to class $c$: $(\mathbf{A}_{k,c}, \mathbf{X}_{k,c}, \mathbf{Y}_{k,c}) \sim G_k$;
5:     Calculate the loss and obtain the gradient via Eq. (10);
6:   Upload the number of samples of each class and the corresponding gradients to the central server;

/\* Server Execution \*/
7: for each communication round $t = 1, ..., T$ do
8:   Initialize $\theta_t \sim P_{\theta_t}$;
9:   for each client $k = 1, ..., K$ do
10:    Send the gradient generation model $\theta_t$ to client $k$;
11:    Receive the number of class samples and the corresponding gradients;
12:  for each class $c = 1, ..., C$ do
13:    Calculate the condensed graph gradient via Eq. (13);
14:    Calculate the real graph gradient via Eq. (12);
15:  Update the condensed features $\mathbf{X}_{glo}'$ via Eq. (14);

# 5 Experiments

In this section, we conduct experiments to verify the effectiveness of FedGM. We introduce 6 benchmark graph datasets across 5 domains along with the simulation strategy for the subgraph-FL scenario, and we evaluate against 8 state-of-the-art baselines. Specifically, we aim to answer the following questions: Q1: Compared with other state-of-the-art federated optimization strategies, can FedGM achieve better performance? Q2: Where does the performance gain of FedGM come from? Q3: Is FedGM sensitive to the hyperparameters? Q4: What is the time complexity of FedGM?

# 5.1 Datasets and Simulation Method

We evaluate FedGM on six public benchmark graph datasets across five domains, including two citation networks (Cora, CiteSeer) [Kipf and Welling, 2016a], one co-authorship network (CS) [Shchur et al., 2018], one co-purchase network (Amazon Photo), one task interaction network (Tolokers) [Platonov et al., 2023], and one social network (Actor) [Tang et al., 2009]. More details can be found in Table 1.
To simulate the distributed subgraphs in subgraph-FL, we employ the Louvain algorithm [Blondel et al., 2008] to partition the graph across 10 clients; the algorithm is based on modularity optimization and is widely used in the subgraph-FL field.

# 5.2 Baselines and Experimental Settings

Baselines. We compare the proposed FedGM with four conventional FL optimization strategies (FedAvg [McMahan et al., 2017], FedProx [Li et al., 2020], SCAFFOLD [Karimireddy et al., 2020], MOON [Li et al., 2021]), two personalized subgraph-FL optimization strategies (Fed-PUB [Baek et al., 2023], FedGTA [Li et al., 2024b]), one subgraph-FL optimization strategy (FedTAD [Zhu et al., 2024]), and one subgraph-FL framework (FedSage+ [Zhang et al., 2021]).

Table 1: Statistics of the six public benchmark graph datasets.

Hyperparameters. For the conventional frameworks, we employ a 2-layer GCN [Kipf and Welling, 2016b] with 256 hidden units as the backbone for both the clients and the central server. The local training epoch is set to 3. Notably, model-specific baselines such as FedSage+ [Zhang et al., 2021] adhere to the custom architectures specified in their original papers. In the FedGM framework, the local subgraph condensation model, the gradient generation model, and the model employed for evaluation are all implemented as 2-layer GCNs with 256 hidden units, and the condensed graph structure generation model is implemented as a 3-layer MLP with 128 hidden units. In the first stage, the number of local condensation epochs is 1000. Based on this, we perform a hyperparameter search for FedGM using the Optuna framework [Akiba et al., 2019] on the ratio $r$ of condensed nodes to real nodes within the range 0 to 1. For all methods, the learning rate for the GNN is set to 1e-2, the weight decay is set to 5e-4, and the dropout rate is set to 0.0. The federated training is conducted over 100 rounds. For each experiment, we report the mean and variance over 3 standardized training runs.
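The Louvain-based client split described above can be sketched with NetworkX. One caveat: Louvain typically yields more communities than clients, and the round-robin merging of communities into client buckets below is an assumption for illustration, since the text does not detail how communities are assigned to the 10 clients.

```python
import networkx as nx

def louvain_partition(graph, num_clients=10, seed=0):
    """Partition a graph into client subgraphs for subgraph-FL simulation.

    Louvain community detection (modularity optimization) yields many
    communities; they are merged round-robin, largest first, into
    `num_clients` roughly balanced buckets (assumed assignment scheme)."""
    communities = nx.community.louvain_communities(graph, seed=seed)
    buckets = [set() for _ in range(num_clients)]
    for i, com in enumerate(sorted(communities, key=len, reverse=True)):
        buckets[i % num_clients] |= set(com)
    # Each client receives the induced subgraph on its bucket of nodes.
    return [graph.subgraph(b).copy() for b in buckets if b]
```

Because each node belongs to exactly one community, the client subgraphs are node-disjoint and jointly cover the original graph; cross-client edges are dropped, which is what creates the missing-neighbor problem studied in subgraph-FL.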
# 5.3 Results and Analysis

Result 1: the answer to Q1. The comparison results are presented in Table 2. The proposed FedGM overall outperforms the baselines. Specifically, compared with FedAvg, FedGM brings at most a $4.3\%$ performance improvement; compared with Fed-PUB, FedGM achieves a performance improvement of at most $4.1\%$. Moreover, FedGM consistently outperforms existing SOTA methods across varying numbers of clients. Notably, as the number of clients increases, the performance advantage of FedGM becomes more pronounced. Specifically, with 20 clients, FedGM achieves a $13.4\%$ performance improvement over FedAvg, as shown in Fig.5. The convergence curves of FedGM and the baselines are shown in Fig.2(b). FedGM performs well from the very beginning of federated communication, which shows that it is suitable for subgraph-FL scenarios with limited communication budgets. In addition, FedGM demonstrates robust performance stability across various client settings, with accuracy fluctuations remaining within a margin of $2\%$, as shown in Fig.5.

Result 2: the answer to Q2. Stage 2 builds upon the foundation established in Stage 1. To answer Q2, we conducted an ablation study to investigate the effectiveness of both Stage 1 and Stage 2, as shown in Table 2. After Stage 1, our method achieves an average accuracy of $72.92\%$, surpassing other state-of-the-art methods on two datasets and demonstrating the feasibility of the condensation-based FGL paradigm. In addition, our method consistently achieves superior performance compared to its single-stage variant across all datasets (e.g., increasing from $84.91\%$ to $90.85\%$ and from $72.92\%$ to $74.56\%$), highlighting the pivotal role of the second stage in enhancing the model's representational capacity and robustness.
This further validates that leveraging global intra-class knowledge contributes to improving the quality of the condensed graphs, reinforcing the efficacy of FedGM.

Table 2: Performance comparison of FedGM and baselines, where the best and second-best results are highlighted in bold and underline.

Figure 5: Performance of FedGM with different numbers of clients.

Figure 6: Sensitivity analysis for the condensation ratio $r$.

Result 3: the answer to Q3. To answer Q3, we assess the performance of FedGM under diverse condensation ratios. The sensitivity analysis on the Cora and CiteSeer datasets is presented in Fig.6. Overall, most values cluster near the maximum, reflecting consistently high accuracy under the majority of conditions. FedGM is insensitive to the condensation ratio, with no significant dependence between performance and $r$.

Result 4: the answer to Q4. To answer Q4, we provide the complexity analysis of FedGM. In Stage 1, condensed graph generation costs $\mathcal{O}(rM)$, where $r$ denotes the condensation ratio and $M$ denotes the size of the labeled dataset. A single transmission of condensed data implies a lower risk of privacy leakage. In Stage 2, condensed graph optimization costs $\mathcal{O}(KTN_{\Theta_{GNN}})$, where $K$ denotes the number of participating clients, $T$ denotes the number of federated communication rounds, and $\Theta_{GNN}$ denotes the size of the GNN gradients or parameters associated with gradient matching. Notably, FedAvg has the lowest communication cost among federated learning methods, and its time complexity is also $\mathcal{O}(KTN_{\Theta_{GNN}})$. Unlike FedAvg, where the shared model parameters represent a trained GNN model, FedGM uses the shared parameters primarily for generating gradients rather than direct model deployment. This distinction implies a reduced privacy risk during the federated process.
Federated graph learning (FGL) is a widely recognized technique that promotes collaborative training of graph neural networks (GNNs) across multi-client graphs. However, existing approaches rely heavily on the communication of model parameters or gradients for federated optimization and fail to adequately address the data heterogeneity introduced by intricate and diverse graph distributions. Although some methods attempt to share additional messages between the server and clients to improve federated convergence, they introduce significant privacy risks and increase communication overhead. To address these issues, we introduce the concept of a condensed graph as a novel optimization carrier for FGL data heterogeneity and propose a new FGL paradigm called FedGM. Specifically, we utilize a generalized condensed-graph consensus to aggregate comprehensive knowledge from distributed graphs, while minimizing communication costs and privacy risks through a single transmission of the condensed data. Extensive experiments on six public datasets consistently demonstrate the superiority of FedGM over state-of-the-art baselines, highlighting its potential as a novel FGL paradigm.
# 1 INTRODUCTION

Relying on natural language in software requirements often leads to ambiguity. In addition, requirements that are not expressed in a formal mathematical notation cannot be guaranteed through formal verification techniques, as required to meet standards (e.g. [9, 11, 70]) in safety-critical software. Expressing requirements in formal notation requires training in the domain of requirements engineering, as well as knowledge of formal notation and associated proof methods, often increasing software development cycle time by $30\%$ [44]. In this project, we aim to ease the burden of writing specifications, helping to bridge the gap between the need for formal verification techniques and their lack of use in the software industry (due to the fast pace of the industry environment). Formalising software requirements ensures clarity, correctness and verifiability, and requires formal specification languages, logic and verification techniques, such as theorem proving and model checking, to guarantee the correctness of the software system under construction. As in many other fields, the development of Large Language Models (LLMs) has opened a world of opportunities where we can exploit their power to generate formal requirements and accompanying specifications. In this paper, we present the results of our structured literature review, which examines how large language models are currently used to assist in writing formal specifications. The main research questions for this systematic literature review are as follows: RQ1: What methodologies leverage Large Language Models (LLMs) to transform natural language software requirements into formal notations? RQ2: What are the emerging trends and future research directions in using LLMs for software requirements formalisation? The structure of the paper is organized as follows. Section 2 presents the methodology used to select the relevant literature.
Section 3 provides a brief overview of works focused on the formalisation of software requirements using large language models (LLMs), addressing RQ1. Literature concerning the traceability of software requirements is discussed in Section 4. Section 5 lists well-known formal notations and related tools developed over the last three decades of research by the formal methods community. Section 6 explores studies involving formal proofs within the frameworks of Unifying Theories of Programming (UTP) and the Theory of Institutions. Section 7 presents future directions based on our findings and the discussion on chain of thought and prompt engineering in sub-section 7.1, addressing RQ2. Concluding remarks are provided in Section 8.

# 2 METHODOLOGY FOR LITERATURE REVIEW

To conduct a structured and thorough review of the literature on Natural Language Processing (NLP), Large Language Models (LLMs), and their use in software requirements, the following approach is taken. Several academic databases, including IEEE Xplore, ACM Digital Library, Scopus, Springer Link, and Google Scholar, are searched using specific keywords. The core search terms include “NLP,” “LLMs,” and “Software Requirements,” with broader terms such as “specification,” “logic,” “verification,” “model checking,” and “theorem proving” used to expand the scope. The number of results differs notably across databases. For example, IEEE Xplore returns 17 peer-reviewed articles, Scopus lists 20, Springer Link filters to 595, ACM Digital Library provides 1,368 results, and Google Scholar shows 14,800 references since 2021. These discrepancies highlight the importance of applying precise selection methods to extract the most relevant studies. To streamline the process of locating strong contributions, the AI-powered tool Elicit [20] is used. Elicit supports the literature review by offering summarised content and DOIs for suggested papers.
While it helps reduce the manual workload during the initial phase, every suggested paper in this review is manually reviewed to ensure its relevance. This ensures that the final list excludes any unrelated or off-topic material. After the initial filtering, a manual review is performed to confirm the relevance and quality of each paper. Abstracts are first assessed to judge suitability. If the abstract lacks clarity or depth, a further examination of the full text is conducted. Abstracts are closely read, and when necessary, the full text is reviewed using the following exclusion and inclusion criteria: Manuscript submitted to ACM

Inclusion Criteria: Studies are included if they offer meaningful theoretical or empirical insights related to NLP, LLMs, and their application in software requirements. This includes topics like specification, formal logic, verification, and formal methods.

Exclusion Criteria: Papers are excluded if they show insufficient relevance to the intersection of NLP/LLMs and software requirements, or if their abstracts or full texts lack sufficient detail. Non-peer-reviewed materials, duplicates, and items suggested by Elicit but deemed irrelevant after manual review are also removed.

# 3 FORMALISING REQUIREMENTS THROUGH LLMS

The paper [18] proposes using LLMs, like GPT-3.5, to verify code by analysing requirements and explaining whether they are met. The work [17] details nl2spec, a framework that leverages LLMs to generate formal specifications from natural language, addressing the challenge of ambiguity in system requirements. Users can iteratively refine translations, making formalization easier. The work [17] provides an open-source implementation with a web-based interface. The work [73] provides verification and refinement of natural language explanations by making LLMs and theorem provers work together. A neuro-symbolic framework, Explanation-Refiner, is presented.
LLMs and theorem provers are integrated to formalise explanatory sentences. The theorem prover then provides the guarantee of validated sentence explanations, and also provides feedback for further improvements to the NLI (Natural Language Inference) model. Error-correction mechanisms can also be deployed using the Explanation-Refiner tool. Consequently, it automatically enhances the quality of explanations of varying complexity. [4] outlines key research directions for the stages of software requirement engineering, conducts a SWOT analysis, and shares findings from an initial evaluation. In [50], symbolic NLP and ChatGPT performance are compared in generating correct JML output for given natural language pre-conditions. In [5], domain models are generated from natural language requirements for an industry-based case study. A domain model extractor is designed and applied to four industrial requirements documents, and its accuracy and overall performance are reported. Automatic synthesis of software specifications through LLMs is provided in [59]. Software configurations give insight into how software will behave. While software configurations are frequently discussed and documented in a variety of external sources, including software manuals, code comments, and online discussion forums, the lack of clarity and organisation in this documentation makes it hard for system administrators to arrive at the most optimal configurations. The work [59] proposed the SpecSyn framework, which uses an advanced language model to automatically generate software specifications from natural language text. It treats specification generation as a sequence-to-sequence learning task and outperforms previous tools by $21\%$ in accuracy, extracting specifications from both single and multiple sentences. The AssertLLM tool is presented in [23].
The tool generates assertions for hardware verification from design specifications, exploiting three customised LLMs. It works in three phases: understanding specifications, mapping signal definitions, and generating assertions. The results show that AssertLLM produced $89\%$ correct assertions with accurate syntax and function. The work [30] reports on formal verification of the natural language specifications of NASA's Node Control Software, deployed on the International Space Station. Errors found in the natural language requirements are reported by the authors, with a commentary on lessons learnt. SpecLLM [54] explores the space of generating and reviewing VLSI design specifications with LLMs. The cumbersome task of chip architects can be eased by exploiting the power of LLMs to synthesise the natural language specifications involved in chip design. The utility of LLMs is explored in two stages: (1) generation of architecture specifications, both from scratch and from register transfer level (RTL) code; and (2) reviewing these generated specifications. The paper [68] introduced a model-based language (Requirements Specification Language - RSL) that enhances software requirements specification by incorporating constrained natural language phrases, including verbs, adjectives, and prepositions, grouped by nouns. It also presented an advanced tooling framework that captures application logic specifications, enabling automated transformations into code, validated through a controlled experiment. The framework functionality is integrated into a development platform named ReDSeeDS. The work was part of an EU project, maintained at www.redseeds.eu [68]. A similar work is reported in [31], which describes the ARSENAL framework and methodology designed to perform automatic requirements specification extraction from natural language. The generated specification can be verified automatically through the framework.
An interesting work is presented in [51]. Business process models (BPMs) specify the requirements of process-aware information systems, and domain experts and system analysts find it difficult to validate BPMs directly. The automation of generating a text that describes these models is addressed in [51]. For the translation process, BPMs are first translated to an RPST, a tree-like structure of the process model, which is then used to generate sentences. These generated sentences are later refined by adding linguistic complexity. The generated natural language is found to be complete and more understandable. In early work from 1996 [69], software requirements were expressed in a limited set of natural language, which the authors referred to as controlled natural language. For this Controlled Language (CL), the authors used the Alvey Natural Language Toolkit (ANLT). This intermediate notation is then translated to logical expressions in order to detect and remove ambiguities in requirement specifications. In [74], the potential and power of LLMs is exploited for improving smart grid requirement specifications. The performance of GPT-4o and Claude 3.5 Sonnet is analysed through F1-scores, which fall in the range of $79\%-94\%$. The paper [91] reports on translation between NL and Linear Temporal Logic (LTL) formulas through the use of LLMs. The challenge of low accuracy and high cost during model training and tuning of general-purpose LLMs is considered in [91]. Dynamic prompt generation and human interaction with LLMs are combined to deal with these challenges. Unstructured natural language requirements are converted to NL-LTL pairs. The approach achieved up to $94.4\%$ accuracy on publicly available datasets with 36 and 255,000 NL-LTL pairs. On a different domain, accuracy improved from $27\%$ to $78\%$ through interactive prompt evolution.
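As a concrete illustration of the NL-to-LTL translations such systems target (our own example, not one drawn from [91]), the requirement "whenever a fault occurs, an alarm must eventually be raised" can be formalised as:

$$ G\,(\mathit{fault} \rightarrow F\,\mathit{alarm}) $$

where $G$ ("globally") and $F$ ("finally") are the standard LTL temporal operators: the implication must hold in every state, and once $\mathit{fault}$ holds, $\mathit{alarm}$ must hold in some later state.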
In [81], the authors present ESBMC-AI, a framework that combines Large Language Models (LLMs) with formal verification to automatically detect and fix software vulnerabilities. The approach uses Bounded Model Checking (BMC) to identify errors and generate counterexamples, which are then fed into an LLM to repair the code, followed by re-verification with BMC. Evaluated on 50,000 C programs from the FormAI dataset, ESBMC-AI effectively fixed issues like buffer overflows and pointer dereference failures with high accuracy, making it a valuable tool for software development and CI/CD integration. In [38], the authors report the performance of LLMs in translating natural language into formal rules by training them on datasets for regular expressions (regex), first-order logic (FOL), and linear-time temporal logic (LTL). The results show that the models adapt well to new terms and symbols, and outperformed existing methods in regular expression translation. In [62], SynVer, a framework to synthesise and verify C programs, is presented. Built on the Verified Software Toolchain, the tool was applied to benchmarks containing basic coding tasks, Separation Logic assertions, and API specifications. In [72], five different benchmark datasets are used to analyse the performance of GPT-3.5 and GPT-4 in ensuring that low-level software requirements meet all high-level requirements. The results reported in [72] showed that GPT-3.5, using zero-shot prompting with explanations, correctly detected full coverage in four out of five datasets and achieved $99.7\%$ recall in spotting missing coverage from a removed low-level requirement. SAT-LLM, a framework to remove conflicting requirements, is presented in [24]. It integrates Satisfiability Modulo Theories (SMT) solvers with LLMs. The performance of standalone LLMs struggles when removing complex requirements conflicts.
With the capability of formal reasoning, integrating SMT solvers with LLMs makes it a viable approach for the task. Experiments show that SAT-LLM performs well compared to the standalone LLM: ChatGPT identified $33\%$ of conflicts with a Precision of 0.85, Recall of 0.31, and an F1 score of 0.46, struggling with hidden or complex conflicts, while SAT-LLM identified $80\%$ of conflicts with a Precision of 1.00, Recall of 0.83, and an F1 score of 0.91. [21] used inputs of error messages, variable names, procedure documentation and user questions. The suitability of NLP techniques is discussed by going through the literature for each step of the software development life cycle. The authors of [21] indicated that NLP is a good candidate for software development; for instance, they discussed the available literature on generating assertions by synthesising sentences in the testing phase. The paper [63] introduced Req2Spec, an NLP-based tool that analyses natural language requirements to create formal specifications for HANFOR, a large-scale requirements and test generation tool. Tested on 222 automotive software requirements at BOSCH, it correctly formalized $71\%$ of them. An early work in the domain is RML [36]. RML bundles features for writing requirements based on a conceptual model, offering organisation and abstraction while maintaining precision, consistency and clarity. The well-defined logic and semantics of RML are presented in [36]. The work [22] evaluated GPT-4o's ability to generate specifications for C programs that can be verified using VeriFast, a static verifier based on separation logic. Their experiments, which use different user inputs and prompting techniques, show that while GPT-4o's specifications maintain functional behaviour, they often fail verification and include redundancies when verifiable.
The early work [66] described automatic translation from natural language sentences to temporal logic, in order to enable formal verification of the requirements. Paper [89] introduced a methodology, Lemur, that integrates large language models with automated reasoners for program verification, formally defining transition rules, proving their soundness, and demonstrating practical improvements on synthetic and competition benchmarks. The performance of Lemur is compared with Code2Inv, ESBMC, and UAutomizer using SV-COMP benchmarks. Code2Inv is a comprehensive work from 2020 [80] contributing towards learning-based end-to-end program verification; it utilised reinforcement learning to train an invariant synthesizer to propose loop invariants. [65] is a systematic review of the domain of translating natural language software requirements into formal ones, based on the Scopus, ACM, IEEE Xplore and Clarivate databases. [67] proposed a pipeline integrating Large Language Models (LLMs) to automatically refine and decompose safety requirements for autonomous driving, addressing frequent updates in the automotive domain. The work evaluated LLMs' ability to support Hazard Analysis and Risk Assessment (HARA) through iterative design science and expert assessments, ultimately implementing the prototype in an industrial setting, where the responsible software development team evaluated its efficiency. [92] presented a framework to ensure consistency between different representations of specifications, maintaining semantic alignment between oral and formal descriptions while ensuring implementation potential through synthesis. It included time extraction, input-output partitioning, and semantic reasoning beyond syntactic parsing. The framework of [92] performed well on various test examples. The work [64] reports available NLP tools for Topological Functioning Modelling (TFM).
Among the six selected pipelines, the best-performing tools were the Stanford CoreNLP toolkit, FreeLing, and the NLTK toolkit. In [60], the power of LLMs is utilised for generating Dafny code. 178 problems are selected from the MBPP benchmark. Three types of prompts are used: (1) context-less prompts; (2) a signature prompt comprising the method signature and test cases; and (3) a Chain-of-Thought (CoT) prompt based on decomposing problems into multiple steps and including retrieval-augmented problems and solutions. GPT-4 outperformed PaLM-2 on the evaluated tasks and achieved the best results using the retrieval-augmented CoT prompt. The technique described in [60] provided 153 verified Dafny solutions to MBPP problems, including 103 synthesized by GPT-4 and 50 written manually. The work [93] introduced LeanDojo, an open-source toolkit that enables programmatic interaction with the Lean theorem prover, providing annotated proof data to support premise selection in theorem proving. Using LeanDojo's extracted data, [93] developed ReProver, a retrieval-augmented LLM-based prover that effectively selects premises, significantly improving theorem proving efficiency while requiring only one GPU week of training. Furthermore, a new benchmark with 98,734 theorems and proofs from Lean's math library is constructed. It is designed to test generalization to novel premises, and demonstrates ReProver's superiority over non-retrieval baselines and GPT-4. Thor [47] is a framework developed to integrate language models with theorem provers. Finding relevant premises while proving a conjecture is a crucial task in implementing automatic theorem provers, and Thor [47] is presented to handle exactly this task. The class of methods for selecting relevant premises is named hammers. To test the performance of the framework, the PISA dataset is used. Thor improved the accuracy from $39\%$ to $57\%$, while $8.2\%$ of the proofs could be solved by neither the language models nor the theorem provers. Thor also outperformed previous works on the MiniF2F dataset. The paper [35] explores the space of integrating GitHub Copilot with formal methods. It introduces key formal languages, i.e. Dafny, Ada/SPARK, Frama-C, and KeY, along with the interactive theorem provers (Coq, Isabelle/HOL and Lean). The integration of Copilot and formal methods is proposed through the development of an IDE containing language servers. Examples of such existing IDEs are VSCode and Eclipse, where support for multiple programming languages is available in one IDE through the Language Server Protocol (LSP). A very comprehensive study on the state of the art of formal specification and verification of autonomous robotic systems is [56]. The authors reviewed the literature on cyber-physical systems, omitting pure software-based systems. The survey covered human-controlled and autonomous systems, either remotely operated or self-governing. Formal properties like safety, security and reliability were considered, while mechanical and physical considerations were excluded from the scope of the survey [56]. [14] is a comprehensive work on NLP verification, in which existing verification approaches are exhaustively analysed. A structured NLP verification pipeline has been established, comprising six critical components: data selection, generation of perturbations, choice of embedding functions, definition of subspaces, robust training, and verification through existing algorithms. To validate this pipeline, the ANTONIO tool has been implemented, enabling modular experimentation with various pipeline components. The key contribution of [14] is the identification of gaps in existing approaches and the proposal of novel solutions to improve the robustness and reliability of NLP verification pipelines. NLP verification results have been reported through additional criteria.
Standard verifiability metrics have been extended, comparing geometric with semantic subspaces. Semantic perturbations are employed while conducting experiments. In [14], the importance of reporting volumes, generalisability, and embedding error of verified subspaces is emphasised, because these factors have a great impact on the reliability and interpretability of verification results. The paper [14] indicates future expansion by evaluating model robustness against adversarial perturbations and dataset variations. Granberry et al. [34] explored how combining large language models (LLMs) with symbolic analysis can help generate specifications for C programs. They enhanced LLM prompts using outputs from PathCrawler and EVA to produce ACSL annotations. Their findings showed that PathCrawler generated context-aware annotations, while EVA contributed to reducing runtime errors. The purpose of Dafny is to automate proofs by outsourcing them to an SMT solver, and the SMT solver needs assertions while automating the process. [61] presented a framework named Laurel to generate Dafny assertions using LLMs. Mugnier et al. [61] designed two domain-specific prompting techniques. The first locates the position in the code where an assertion is missing, through analysis of the verifier's error message; a placeholder is inserted at that location. The second technique involves providing example assertions from the codebase. Laurel was able to generate over $50\%$ of the required helper assertions, making it a viable approach for automating the program verification process. The work [57] presents a novel framework named SpecGen to generate specifications through LLMs. Two phases are applied. The first uses prompts in a conversational style. The second is deployed where correct specifications are not generated; here, four mutation operators are applied to ensure the correctness of the generated specifications.
Two benchmarks, i.e., SV-COMP and SpecGen, are used. Verifiable specifications were generated successfully for 279 out of 384 programs, making [57] a viable approach. The work [13] deals with the challenges involved in NL2SQL transformation, which is widely deployed in Business Intelligence (BI) applications. Bora et al. [13] developed a new benchmark focused on typical NL questions in industrial BI scenarios and added question categories to it. Furthermore, two new semantic similarity evaluation metrics are presented in [13], increasing NL2SQL transformation capabilities.
# 4 TRACEABILITY OF SOFTWARE REQUIREMENTS
A dynamic requirements traceability model is proposed in [76], enabling improved software quality through verification and validation of functional requirements. The model also ensured software scalability, handling both small and large projects. A novel model for traceability and verification in the early development phase is presented in [77]; the model's adaptability to requirement changes is improved and its impact assessed. [29] provided a comprehensive review of software requirements traceability, covering key elements, challenges, and techniques. The study classified traceability approaches and highlighted prospects for future research. An empirical analysis of requirements completeness is performed in [75], showing that enforcing completeness in requirements reduced defect rates. Traceability metrics are produced and regression analysis is performed to quantify software quality. [40] introduced the RETRO tool for automating requirements traceability matrix (RTM) generation. The study showed that RETRO significantly improved accuracy and efficiency compared to manual tracing methods. [84] proposed a hybrid approach combining VSM and BTM-GA to enhance traceability link generation. The method outperformed traditional IR techniques, improving recall and precision, particularly in agile development contexts.
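The VSM side of such IR-based traceability recovery can be sketched with TF-IDF vectors and cosine similarity: each requirement and each artifact becomes a weighted term vector, and candidate trace links are ranked by similarity. The requirement and artifact texts below are invented for illustration:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for a list of tokenised documents."""
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    n = len(docs)
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
            for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

requirements = ["user login password authentication".split(),
                "generate monthly sales report".split()]
artifacts = ["authentication module validates password".split(),
             "report generator aggregates sales".split()]

vecs = tfidf_vectors(requirements + artifacts)
req_vecs, art_vecs = vecs[:2], vecs[2:]
for i, r in enumerate(req_vecs):
    best = max(range(len(art_vecs)), key=lambda j: cosine(r, art_vecs[j]))
    print(f"requirement {i} -> artifact {best}")
```

Approaches such as Trustrace or the BTM-GA hybrid then re-weight or filter these ranked candidate links using additional evidence; the sketch covers only the plain VSM baseline.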
Trustrace [2] is a trust-based traceability recovery approach that leverages data mined from software repositories. In comparison to standard IR techniques, Trustrace improved precision and recall in traceability link retrieval. [37] applied deep learning for traceability, incorporating semantic understanding and domain knowledge using a Bi-GRU. It outperformed the traditional VSM and LSI approaches in terms of accuracy and effectiveness. A topology-based, model-driven approach for backward requirements traceability is presented in [6], formalising specifications and establishing trace links between real-world functional units and software artifacts. In order to manage the software evolution process, [16] proposed an event-based traceability mechanism. The work reported improved change management by linking artifacts through an event service, and maintained consistency when deployed in a distributed development environment. Tracebok [19] is a body of knowledge on software requirements traceability. The framework categorised traceability approaches and provided guidance for implementing traceability in software projects. [58] reviewed visualisation tools and techniques for software requirements traceability. The challenges emphasised were scalability and visual cluttering, providing insights toward improved traceability visualisation. In [78], Z notation is used to represent the SRS and design artifacts in order to establish the traceability of functional requirements. The SRS document originally used UML diagrams for requirement analysis and software design; Z notation established trace paths based on defined rules. A prototype framework based on XML was developed to trace the requirements in software design, named RVVF (Requirement Verification and Validation Framework). [15] established the concept that terminology extraction can improve traceability from formal models to textual requirements. Cerbah et al.
[15] presented a fully implemented system that analysed text corpora to generate hierarchically organised terminological resources and formal class hierarchies. These resources and class hierarchies are then communicated to the Troeps knowledge server via an XML stream. The system also enabled user validation of terminology elements and supported bidirectional linkage between the model and the source documents. The survey paper [82] examined requirements traceability, including definitions, challenges, and tools. [71] presented a tool named TOOR (traceability of object-oriented requirements), based on the principles of hyper-programming and hyper-requirements. Goknil et al. [33] addressed requirements and their relationships from a traceability perspective. They introduced a metamodel for requirements that includes formally defined relation types; the relations are formalised using first-order logic to enable inference and consistency checking. A supporting tool demonstrated the approach on a real-world requirements document, helping to uncover hidden dependencies and detect contradictions.
# 5 FORMAL METHODS AND TESTING
A detailed examination of the interaction between formal specification techniques and software testing is presented in the 2009 ACM survey [41]. The paper argues that formal descriptions of systems can significantly support the testing process. It introduces a classification of formal specification languages into several distinct categories: model-based methods such as Z and VDM, languages grounded in finite-state representations like FSMs and Statecharts, algebraic approaches such as OBJ, process algebraic notations like CSP and CCS, and hybrid methods that integrate both continuous and discrete system behaviours, although the latter are treated as outside the scope of the survey.
In practical applications, formal methods can contribute to testing by either enabling the automated generation of test cases or offering mechanisms to define precise test oracles. Executable specifications, in particular, may be subjected to model-checking techniques to assess conformance with desired properties. The theoretical underpinnings that link formal specifications with testing processes are also explored, with [28] offering foundational contributions, especially in identifying the test selection assumptions that underpin the effectiveness of test generation. Each category of formalism is associated with its own techniques for producing relevant test artefacts.
# 5.1 Model-Based
In this category, testing often involves partitioning the input domain using assumptions about uniform system behaviour within each partition. Logical expressions, frequently cast in disjunctive normal form, are used to define these partitions and guide automation. Further approaches include domain analysis techniques that focus on identifying critical boundaries for functions and operators. Additionally, refinement-based testing and mutation techniques are discussed as part of this landscape. Although highly suitable for defining test oracles, model-based methods are often limited in their ability to automatically generate tests without the assistance of theorem-proving tools. Manuscript submitted to ACM
# 5.2 Finite State-Based
Testing approaches that rely on finite-state representations frequently define correctness in terms of language-based conformance between the implementation and its specification. A common strategy involves using a fault model to constrain the number of system states considered. Test generation techniques initially focus on deterministic finite-state machines, before expanding to address partial and non-deterministic forms.
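A minimal sketch of such test generation for deterministic finite-state machines, assuming a simple transition-cover criterion (reach each transition's source state via a shortest input sequence, then fire the transition). The turnstile machine is a standard toy example, not one taken from the survey:

```python
from collections import deque

def transition_cover(fsm, start):
    """Derive input sequences that together exercise every transition
    of a deterministic FSM given as {(state, input): next_state}."""
    paths = {start: []}            # shortest input sequence reaching each state
    queue = deque([start])
    while queue:                   # breadth-first search over reachable states
        s = queue.popleft()
        for (state, inp), nxt in fsm.items():
            if state == s and nxt not in paths:
                paths[nxt] = paths[s] + [inp]
                queue.append(nxt)
    # one test per transition: reach the source state, then fire the input
    return sorted(paths[s] + [i] for (s, i) in fsm)

# Toy two-state machine: a turnstile (locked/unlocked).
turnstile = {("locked", "coin"): "unlocked",
             ("locked", "push"): "locked",
             ("unlocked", "push"): "locked",
             ("unlocked", "coin"): "unlocked"}
for test in transition_cover(turnstile, "locked"):
    print(test)
```

Stronger conformance criteria (e.g. the W-method) additionally append distinguishing sequences to each test to check the destination state, which is where the combinatorial growth mentioned next becomes acute.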
While these methods offer structured ways to derive test suites, the primary challenge lies in managing the combinatorial growth in possible state sequences.
# 5.3 Process Algebras
Systems described using process algebra are typically interpreted through labelled transition systems (LTS), which can be infinite. To address this, state space reduction techniques are often employed. In many respects, the methods and challenges of testing in this domain resemble those found in finite-state approaches, particularly with regard to scalability and complexity management.
# 5.4 Algebraic
Algebraic specifications are especially well-suited to object-oriented software. Test cases in this setting may be derived either from the syntactic structure of operations or from the logical axioms that define their intended behaviours. This method provides a strong formal basis for validation, although transforming abstract axioms into executable test procedures requires further elaboration and interpretation. The survey places particular emphasis on the value of automated reasoning tools in supporting test activities. Model checkers are highlighted as being capable of producing counterexamples when temporal properties are not satisfied, which can subsequently be re-purposed as test cases. Similarly, properties defined in temporal logic can guide the construction of structured test sequences.
# 5.5 Formal Tools
Isabelle/HOL is a powerful theorem prover based on higher-order logic. It features robust automation, a large repository of verified theorems, and tools for interactive proof construction. The system allows formal reasoning about complex mathematical properties and software systems, offering both depth and flexibility in its approach. Frama-C is a modular platform designed for the formal analysis of C code.
It supports ACSL (ANSI/ISO C Specification Language) for specifying expected behaviour and includes plug-ins for static analysis, verification, and integration with theorem provers. This makes it highly applicable to domains that demand rigorous validation of software correctness. There has been significant progress in the use of probabilistic techniques for formal verification [25, 45, 49]. Probabilistic model-checking focuses on evaluating how likely a system is to meet certain criteria, rather than delivering absolute verdicts. A comprehensive overview of developments in this area is provided in [1], particularly in the field of statistical model-checking, which offers scalable solutions for analysing stochastic systems in practical contexts. Promela is a modelling language aimed at the representation of concurrent systems. It is paired with the SPIN model checker, which is widely used in both academic and industrial settings. A supporting tool, Modex, can automatically extract Promela models from C code. SPIN evaluates properties defined in linear-time temporal logic (LTL), and when verification fails, the counterexamples it provides can be used to create test cases. Its command-line interface makes it suitable for automation and integration into larger tool-chains. TLA+ offers a mathematical framework for specifying systems, particularly those involving concurrency. It emphasises logical precision using simple mathematical constructs. PlusCal provides a more familiar, algorithm-like syntax that compiles directly into TLA+, enabling a smoother transition for developers accustomed to pseudocode-style representations.
# 6 FORMAL PROOFS IN UTP AND THEORY OF INSTITUTIONS
[88] provided a tutorial introduction to Hoare and He's Unifying Theories of Programming (UTP) and the concept of designs.
It explained how the alphabetised relational calculus can describe various programming constructs, illustrating its application to imperative programming theories like Hoare logic and the refinement calculus. [32] introduced the concept of institutions as a formal framework to model logical systems. It presented several foundational results, such as gluing signatures, preserving theory structuring, and extending institutions to include constraints for abstract data types, contributing significantly to the theory of specifications and programming languages. [87] discussed Circus, a concurrent language that integrates imperative programming, CSP, and Z through the unifying theories of programming. It provided a formalisation of Circus in the UTP framework, highlighting its use for refining concurrent systems. [39] explored integrating runtime verification into an automated UAS Traffic Management system. It demonstrated how runtime verification could ensure system safety by applying formal requirements to various subsystems, validated through real-world flight simulations. [12] presented formal verification applied to the RTEMS real-time operating system, using Promela models and the SPIN model-checker to verify multi-core processor qualification for spaceflight. It discussed linking UTP semantics to enhance the test generation process, with a focus on future research directions. [26] introduced Isabelle/UTP, an implementation of Hoare and He's UTP for unifying formal semantics. It enables mechanising computational theories across paradigms and provides proof tools for Hoare logic, the refinement calculus, and other verification techniques, supporting the development of automated verification tools. [46] discussed the use of UML and the UML Testing Profile (UTP) in model-based testing for resource-constrained real-time embedded systems.
It addressed the generation of test artefacts from UTP standards and presented a detailed algorithm for creating test cases for such systems. [27] presented a Java model of the priority inheritance protocol in the RTEMS real-time operating system, verified using Java Pathfinder. It detected and fixed known bugs in the RTEMS implementation, ensuring the absence of issues like data races, deadlocks, and priority inversions. [83] proposed an iterative prompting framework for pre-trained language models to handle multi-step reasoning tasks. It introduced a context-aware prompter that dynamically synthesised prompts based on the current step's context, improving the model's reasoning capabilities in complex tasks.
# 7 FUTURE DIRECTIONS
In this section, we outline prospective directions informed by the literature review. Much of the literature in this review employed queries containing a problem description and some instructions to achieve a desired outcome. Such querying of LLMs without training or examples of the current task is typically referred to as zero-shot prompting and shows excellent performance on many tasks [48]. Surprisingly, [48] also showed that the performance of LLMs on some challenging problems can be improved by encouraging the LLM to reason using intermediate steps through a simple addition to problem prompts ("Let's think step by step").
# 7.1 Advanced Prompt Engineering
Beyond this approach is one-shot prompting, which includes an example of a solved problem to guide the LLM into generating the desired output [55]. This can be extended to few-shot prompting, where a number of differing examples guide the LLM. But improved results are not assured, as some studies, e.g. [95], show that zero-shot can outperform the few-shot case [96]. [42] reviewed the evolution of prompt engineering in LLMs, including discussions on self-consistency and multimodal prompt learning.
It also reviewed the literature related to adversarial attacks and evaluation strategies for ensuring robust AI interactions. Chain-of-Thought (CoT) [85] prompting involves a sequence of prompts producing intermediate results that are generated by the LLM and used to drive subsequent prompting interactions. These orchestrated interactions can improve LLM performance on tasks requiring logic, calculation, and decision-making in areas like mathematics, common-sense reasoning, and symbolic manipulation. CoT requires the LLM to articulate the distinct steps of its reasoning, subdividing larger tasks into multi-step reasoning stages, each acting as a precursor for subsequent stages. But the CoT approach may require careful analysis when used with larger LLMs offering long input contexts. This is because of the lost-in-the-middle problem, where LLMs show a U-shaped attention bias [42] and can fail to attend to information in the middle of the context window. PromptCoT [94] enhanced the quality of solutions for diffusion-based generative models by employing the CoT approach, minimising computational cost through adapter-based fine-tuning. Prompt design is explored in detail in [3], which discussed Chain-of-Thought and Reflection techniques, along with best practices for structuring prompts and building LLM-based agents. Besta et al. [10] introduced the concept of reasoning topologies, examining how structures such as Chains, Trees, and Graphs of Thought improve LLM reasoning. They also proposed a taxonomy of structured reasoning techniques, highlighting their influence on both performance and efficiency. Structured Chain-of-Thought (SCoT) prompting was proposed by [53] to enhance code generation by incorporating structured programming principles. This approach significantly improved the accuracy and robustness of LLM-based code synthesis compared to standard CoT methods.
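The contrast between these prompting styles can be sketched as plain prompt builders; the prompt wording below is hypothetical and not quoted from [85] or [53]:

```python
# Illustrative prompt builders contrasting zero-shot, chain-of-thought,
# and SCoT-style prompting. The templates are invented for this sketch.

def zero_shot(problem: str) -> str:
    """Plain question, no examples, no reasoning cue."""
    return f"Q: {problem}\nA:"

def chain_of_thought(problem: str) -> str:
    """Append a cue asking the model to emit intermediate reasoning steps."""
    return f"Q: {problem}\nA: Let's think step by step."

def structured_cot(problem: str) -> str:
    """SCoT-style: constrain the reasoning to programming constructs
    (sequence, branch, loop) before the code itself is generated."""
    return (f"Task: {problem}\n"
            "First outline the solution using sequence, branch and loop "
            "structures, then write the code.")

print(chain_of_thought("How many helper assertions does the proof need?"))
```

In practice these strings would be sent to an LLM API, and for CoT the model's intermediate output would be fed back into subsequent prompts; only the template construction is shown here.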
Building on the theme of automation, [79] introduced Automate-CoT, a technique for automatically generating and selecting rational chains for CoT prompting. By minimising dependence on human annotations, it enabled more flexible adaptation of CoT strategies across diverse reasoning tasks. Complementing these efforts, [86] presented a prompt pattern catalog that offered reusable design patterns to optimise LLM interactions, thereby refining prompt engineering practices for a wide range of applications. Additionally, [90] proposed Reprompting, a Gibbs-sampling-based algorithm for discovering optimal CoT prompts. The proposed prompting technique consistently outperformed human-crafted alternatives and demonstrated high adaptability across various reasoning benchmarks. Retrieval-Augmented Generation (RAG) [52] supplements problem information with specifically retrieved information and is often used in knowledge-intensive tasks. This helps ensure the LLM attends specifically to the retrieved information when addressing the user's prompt. LLM model selection (chat vs. reasoning) and fine-tuning, such as with LoRA [43], remain among a growing number of possibilities for exploration. Based on the literature survey conducted, we sketch one line of research agenda: in VERIFAI, we aim to improve the techniques that bridge the gap between informal natural language descriptions and rigorous formal specifications, through the refinement of prompt engineering, the incorporation of chain-of-thought reasoning, and the development of hybrid neuro-symbolic approaches.
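The RAG pattern mentioned above can be sketched with a naive token-overlap retriever; a real system would use dense embeddings and a vector index, and the two-document corpus here is invented for illustration:

```python
# Minimal retrieval-augmented prompting sketch (stdlib only): score documents
# by token overlap with the query, then prepend the best match to the prompt.

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by the number of query tokens they share."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context so the model attends to it when answering."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = ["Dafny discharges proof obligations via an SMT solver.",
          "NOVA classifies foods by the extent of processing."]
print(build_prompt("Which solver does Dafny use?", corpus))
```

The design choice is the same regardless of retriever quality: the retrieved passage is placed inside the prompt so that generation is grounded in it, rather than relying solely on the model's parametric knowledge.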
This draft is a working document containing a summary of ninety-four (94) papers, with additional sections on Traceability of Software Requirements (Section 4), Formal Methods and Testing (Section 5), and Unifying Theories of Programming (UTP) and the Theory of Institutions (Section 6). Please refer to the abstracts of [7, 8]. The key differences between this draft and our recent submissions with similar titles, i.e. AACS 2025 [7] and SAIV 2025 [8], are as follows: [7] is a two-page submission to the ADAPT Annual Conference, Ireland. Submitted on 18th of March, 2025, it went through a light-weight blind review and was accepted for poster presentation; the conference was held on 15th of May, 2025. [8] is a nine-page paper with an additional nine pages of references and summary tables, submitted to the Symposium on AI Verification (SAIV 2025) on 24th of April, 2025, where it went through a rigorous review process. The version uploaded to arXiv.org [8] is an improved version of the submission, addressing the specific suggestions to improve the paper.
# Introduction to Food Processing
As concern grows over the health impacts of processed foods1–3, researchers, policymakers, and consumers alike are asking a critical question: What does it really mean for a food to be processed? The answer is far from simple. While we increasingly rely on epidemiological data to draw connections between diet and disease, the upstream task of defining and classifying "processed food" remains complex and contested. Broadly, food processing refers to any alteration made to a raw agricultural product that affects its form, flavor, shelf life, or safety. This definition is recognized by major organizations such as the European Food Information Council (EUFIC), the United States Department of Agriculture (USDA), and the Food and Agricultural Organization (FAO) of the United Nations (UN)4–6. By this standard, nearly all food we consume is processed to some degree, from the boiling of vegetables at home to the industrial milling of grains into flour. Yet not all processing is equal. Cooking a tomato is not the same as engineering a shelf-stable, hyper-palatable snack. The spectrum of food transformation spans simple acts like chopping and heating to complex industrial formulations involving additives, emulsifiers, and extrusion technologies. Understanding which processes carry potential health risks, and under which circumstances, is central to modern nutrition science. To appreciate the evolving relationship between food processing and human health, it is first necessary to examine how food preparation practices have developed over time and how technological and societal shifts have redefined what we eat.
# Historical Trends and Drivers
Food processing is one of humanity's oldest innovations. From roasting meat over fire to fermenting vegetables, early food transformation techniques were born out of necessity: for preservation, safety, and seasonal scarcity.
For millennia, the methods and intensity of food processing were largely shaped by geography, class, and labor. In ancient societies, elaborate techniques like milling or breadmaking were often reserved for elites; white bread, for instance, was a status symbol in the Roman Empire4. Preservation practices like salting, fermenting, and cooling date back at least 13,000 years and helped societies endure poor harvests and long winters7,8. These traditional methods persisted until the Industrial Revolution, which radically transformed both the purpose and scale of food processing. In the 19th century, inventions such as canning, pasteurization, oil hydrogenation, and refrigeration laid the groundwork for what would become the global, industrialized food system9. As factory work drew people into cities, diets shifted alongside infrastructure. Trains and steamboats enabled rapid distribution of food across vast distances, while electricity and crude oil unlocked faster production and storage. The invention of Freon in the mid-20th century made home refrigeration affordable and widespread, paving the way for ready-to-eat (RTE) meals that required little or no preparation. In their early forms, many of these meals were dry, shelf-stable products designed to last at room temperature. From the debut of RTE breakfast foods like Kellogg's Corn Flakes in 1906 to the success of Swanson's TV dinners in the 1950s10, these innovations marked the beginning of a new era of convenience foods, tailored to demanding workdays and fast-paced living1 But convenience was only part of the story. As competition grew in the late 20th century, food manufacturers began to study consumer taste preferences scientifically12. The result was a new design principle: hyper-palatability, i.e., foods engineered to heighten pleasurable qualities like sweetness, saltiness, and richness by combining fat, salt, and sugar/carbohydrates at moderate to high levels13.
These combinations, rarely found in natural foods, were crafted to maximize appeal and encourage repeat consumption14. Today, what we broadly define as highly processed foods (HPFs), multi-ingredient, industrially formulated products that are typically ready to heat or eat and include components uncommon in traditional cooking, now dominate modern diets15. Their widespread availability, affordability, and engineered appeal have fundamentally reshaped what, how, and why we eat.
# Current Frameworks for Evaluating and Quantifying Food Processing
Early attempts16–18 to categorize food processing followed a straightforward model broadly capturing how many steps separate a raw agricultural product from its final form:
- Primary processing includes minimal interventions like cleaning, cutting, and refrigeration, measures that retain a food's original structure and composition.
- Secondary processing involves turning raw ingredients into more complex products through cooking, fermenting, or mixing, such as home-cooked or restaurant meals.
- Tertiary processing often refers to the industrial assembly of RTE meals using pre-cooked ingredients, additives, and packaging technologies.
Additives, which are compounds or components added to food, serve various functions, such as extending shelf life, enhancing flavor, altering appearance, or restoring nutrients lost during processing4,5. For example, vitamin B1 may be reintroduced into refined grains, while paprika extract might be used to maintain visual appeal. More broadly, tertiary processing technologies are designed to: 1) preserve or delay food decay, 2) maintain or enhance quality and sensory attributes, 3) meet specific nutritional needs, and 4) reduce waste throughout the food supply chain19. These techniques require specialized equipment and resources, and are typically applied at the commercial scale, making them a key distinction between home-prepared meals and their industrially produced counterparts6.
While this tiered system offers an intuitive framework, it quickly falls short under the complexity of today's global food supply. In practice, few foods reach our tables untouched by post-harvest modifications. For example, milk is pasteurized and fortified, grains are milled, and oils are refined and preserved. Even fresh fruits may be coated with wax to extend shelf life20. These processes, though common, blur the distinctions between processing categories. This blurring is particularly evident in secondary processing, where home-cooked meals often begin with ingredients that have already undergone extensive transformation. To navigate this ambiguity, nutrition epidemiologists and food scientists have developed several classification systems and ontologies that describe foods according to their degree of processing. These systems were designed to translate the complexity of the modern food system into structured, actionable categories, particularly to support investigations into the health impacts of HPFs. In the following section, we examine some of the most widely used frameworks.
# NOVA Classification
The NOVA classification system, first introduced in 2009 by Brazilian researchers led by Carlos Monteiro, organizes foods based on the extent and purpose of processing rather than on where or by whom the processing occurs21. Designed to support public health research and dietary surveillance, NOVA originally included three categories: minimally processed foods (MPFs), processed culinary ingredients (PCIs), and ultra-processed foods (UPFs). MPFs are foods that undergo minimal changes and retain their nutritional structure. They may be cleaned, portioned, frozen, boiled, dried, pasteurized, or packaged, but not substantially transformed. PCIs are substances derived from natural foods and intended for use in cooking rather than direct consumption. Examples include sugar, oil, flour, and salt, produced through processes such as milling, refining, and extraction.
UPFs are industrially formulated foods primarily composed of PCIs and some MPFs. They are often pre-cooked and contain additives that enhance taste, appearance, and shelf life. These include RTE meals and branded, internationally distributed commercial food products, which require industrial equipment for production, such as hydrogenation and fortification. According to NOVA's developers, this class contributes to unhealthy eating patterns and increased risk of chronic disease. By 2016, the NOVA classification system included a fourth group, processed foods, creating NOVA as we know it today (Table 1)22. This group consists of foods made by combining MPFs with a few PCIs, typically containing a few ingredients aimed at improving shelf life or altering sensory qualities. Some processes characteristic of processed foods include salting, smoking, canning, bottling, and non-alcoholic fermentation. While the system offers an intuitive framework, its application is not without challenges. Commercial manufacturers rarely disclose their full production methods, making it difficult to assess the number and type of processing steps. As a workaround, analysts may employ deformulation, where ingredient lists are reverse-engineered to infer processing steps. This approach is time-consuming and imprecise due to the lack of exact ingredient amounts, leading to significant variability and self-regulation within the food industry23,24. Foods also consist of multiple components, each with different processing histories, further complicating how to weigh the ingredient processes for the overall food item. Additives pose additional challenges. Indeed, chemical compounds like Allura Red AC (E129)25 are clearly classified as artificial additives, but natural coloring agents like paprika extract raise questions about whether all additives used in industrial settings should be considered equal.
Similarly, the addition of nutrients such as vitamin D in milk or vitamin B1 in flour complicates the assumption that all additives are markers of processing detrimental to human health.
Table 1: The four classes of the NOVA food classification system. Examples of common processes and foods found in each class are provided.
Although many ambiguities remain, as the next section will explore, NOVA has become a widely adopted tool in nutritional epidemiology and public health for evaluating how food processing affects health outcomes.
# Public Health Implications of NOVA
NOVA has played a central role in investigating the health risks associated with UPF consumption. Between 2015 and 2019, 95% of studies on this topic relied on NOVA to categorize foods26. These studies have linked UPF consumption with increased risk of obesity27, over-eating1, type 2 diabetes28, cardiovascular disease (CVD)29, and other non-communicable conditions30–35. Despite this extensive body of epidemiological evidence, the underlying biological mechanisms driving these associations remain only partially understood. One proposed pathway involves the hyper-palatability of UPFs, which are often energy-dense and rich in salt, sugar, and fat. These sensory characteristics are believed to override satiety signals, promoting excess intake, even though the exact reward-related neural pathways remain an area of ongoing research36. This focus draws attention to the nutritional quality of UPFs, which is notably not a formal criterion within the NOVA classification system. Hyper-palatability, however, is not simply a function of elevated levels of fat, sugar, and salt, nor can it be fully captured by statistical models that combine these nutrients linearly, a strategy researchers have attempted with mixed success1,37,38. Hyper-palatability also involves physical transformations to food that often alter the natural food matrix39,40.
This matrix represents the complex physical and chemical organization of nutrients and bioactive compounds in whole foods and plays a crucial role in digestion, absorption, and satiety. UPFs frequently exhibit textures that promote rapid consumption, such as crunchiness or softness, low chew resistance, and melt-in-the-mouth consistency. These altered food matrices may compromise nutrient bioavailability, postprandial glycemic responses, and satiety levels41–44. The structural changes in the food matrix often correlate with the inclusion of industrial additives that further modulate how UPFs interact with our body. Additives such as non-nutritive sweeteners and emulsifiers have been linked to microbiota disruption, which in turn can trigger intestinal inflammation, impair glycemic control, and contribute to long-term metabolic dysfunction45–47. Artificial food dyes, though added purely for visual appeal, have also been shown in preclinical models to compromise gut barrier integrity and influence inflammatory responses48,49. These compositional and structural features of UPFs are compounded by a third, less visible layer of concern: chemical contaminants that are not disclosed on ingredient lists. UPFs are frequently exposed to neoformed compounds produced during high-temperature processing methods such as baking, frying, and roasting50. In addition, industrial chemicals like phthalates and bisphenols can leach into food from packaging materials and processing equipment51,52. Though not intentionally added as ingredients, these substances have been detected across a wide range of packaged foods and are increasingly recognized as contributors to metabolic dysfunction, reproductive and developmental abnormalities, and hormone-sensitive cancers53–56. These multifactorial and often invisible risks challenge conventional models of dietary assessment and raise critical questions about the long-term safety of industrialized food systems.
# Scientific and Policy Critiques

The NOVA system has profoundly reshaped how we study and discuss food processing. It has underpinned a new wave of epidemiological research, introduced the term “ultra-processed food” into policy discourse, and raised public awareness about the links between modern diets and long-term health. Despite its widespread adoption and paradigm-shifting influence, NOVA remains a qualitative and descriptive tool, which has led to inconsistencies and ambiguity across studies. Foods are often classified differently depending on the assessor, as interpretations vary and NOVA’s definitions have evolved, changing eight times between 2009 and 2017⁵⁷. The reliance on subjective, labor-intensive assessments based on incomplete and heterogeneous data across studies and countries results in poor inter-rater reliability and limited reproducibility58,59. Studies have shown that trained experts frequently disagree on whether certain items should be considered ultra-processed, even when detailed ingredient lists are available. This subjectivity undermines both the repeatability and the scientific robustness of the classification.

NOVA has also faced critique from food scientists and industry experts who argue that it oversimplifies processing and fails to account for differences in nutritional quality and health risk across UPF subgroups60. For instance, nutrient-fortified whole-grain cereals and baked products, fermented foods like many yogurts, and plant-based alternatives are all classified as equally ultra-processed, despite evidence suggesting they may offer health benefits, including reduced cardiovascular risk2,61. Furthermore, NOVA does not differentiate between different types of processing, such as pasteurization or fermentation, which can preserve or enhance nutritional value, compared to other industrial techniques that may reduce nutrient density.
This coarse-grained approach groups nutritionally diverse items under a single label, producing broad estimates of UPF prevalence — such as 60% globally62, 70% in the U.S.63 and Greece64, and 80% in South Africa65 — that lack the specificity needed to support targeted and actionable public health interventions. In some cases, the system’s application has led to contradictory outcomes, where subcategories of foods included in the sustainable and traditional Mediterranean diet have been classified as 58.7% or 41.0% ultra-processed, respectively64. These examples underscore the need to refine and disaggregate the UPF category for more nuanced health assessment.

These limitations have constrained both the scope and precision of research into the health effects of UPFs, weakening public trust and limiting their integration into formal dietary guidelines6. One major challenge is the lack of a clear mechanistic understanding of how UPFs cause harm. Given their estimated high prevalence in the global food supply, a blanket recommendation to avoid all UPFs risks oversimplifying the evidence. Such guidance may also exacerbate social and economic disparities, particularly for lower-income populations who often depend on affordable, shelf-stable food products66. Instead, there is a need for nuanced, mechanistically grounded guidelines that distinguish among types of UPFs based on both health impacts and socioeconomic realities.

While observational studies consistently associate UPF consumption with adverse outcomes, the underlying causal mechanisms remain uncertain. Most evidence comes from observational designs, which are susceptible to confounding factors, including socioeconomic status and broader lifestyle variables. Only a few short-term randomized controlled trials exist, and these often struggle to isolate the effects of processing from other variables like energy density and nutrient composition37.
Large-scale, long-term intervention trials, which would provide more definitive evidence, are currently lacking67,68. Although several interventional studies are underway, they primarily focus on intermediate outcomes and are not designed to assess long-term endpoints, such as chronic disease incidence or mortality69.

Importantly, many of these limitations are not unique to NOVA. Other classification systems also suffer from low reproducibility and high subjectivity, stemming from reliance on expert judgment in the absence of standardized, structured data. This has led to growing calls within the scientific community for a more objective framework grounded in measurable biological mechanisms rather than variable interpretations15. Among the potential starting points, the nutritional profile of foods remains the only dimension that is consistently regulated and reported worldwide, and thus may serve as a practical foundation for systematizing food classification. Yet, without standardized, open-access, high-resolution data in nutrition science, interpretive disagreements will persist, with cascading effects on epidemiological and clinical findings. While overcoming the challenges of conducting large-scale clinical trials in nutrition remains difficult69, progress can be made by improving the breadth and quality of underlying data. Strengthening the data foundations of future classification systems will better support experimental research and clinical studies, advancing nutrition science toward a more accurate, evidence-based, and data-driven discipline70.

# Other Food Processing Classification Systems

The launch of NOVA sparked greater interest in how food processing affects human health. As more evidence emerged about the effects of UPFs, researchers became increasingly eager to explore their components to uncover the biological mechanisms influencing health outcomes.
This curiosity led to the development of various food processing classification systems intended to encompass the complexities of the global food supply. Consequently, while many classification systems exist, only a few, such as NOVA, are widely recognized in the public health community. Reflecting public interest in food processing's effects, these systems are typically tailored for either consumers or researchers.

Consumer-focused food classification systems emphasize simplicity and comprehension, ensuring they are quick and easy to use. These systems produce straightforward labels that help consumers make informed choices, and their adoption is generally driven by two main approaches: 1) front-of-pack (FOP) labeling, such as Nutri-Score71 and the Health Star Rating System72, and 2) mobile applications like Yuka73 and GoCoCo74. FOP labels usually require governmental backing for widespread adoption, making them mandatory for food manufacturers. This facilitates consumer interaction since every grocery store product features these labels. Meanwhile, mobile applications offer barcode scanners that enable consumers to quickly access a food product’s score online.

When designing food classification systems for research purposes, the primary goal is to enhance frameworks like NOVA to better clarify the relationship between food processing and non-communicable diseases15,75. These systems can be developed retrospectively, based on observed health outcomes in large cohorts, such as those from the European Prospective Investigation into Cancer and Nutrition (EPIC)76, or by integrating additional variables into the classification process. For example, some systems go beyond additive content to account for overall ingredient composition and differentiate between beneficial and harmful types of processing.
Notable efforts in this direction include studies from the University of North Carolina (UNC)77 and the Système d’Information et de Gestion des Aliments (SIGA)78. Overall, research-oriented classifications aim to generate more refined groupings of food products to help uncover the mechanisms linking diet to disease. Next, we examine two widely used systems in public health, Nutri-Score and SIGA, in greater detail.

# Nutri-Score

As an example of consumer-oriented labeling systems, Nutri-Score71 represents the most widely adopted front-of-pack (FOP) scheme currently used worldwide. Developed by French public health researchers, it is designed to offer a simplified, at-a-glance assessment of a product’s nutritional quality. Nutri-Score has been formally adopted by seven European countries—France, Spain, Belgium, Switzerland, Germany, Luxembourg, and the Netherlands—and is endorsed by public health authorities for its effectiveness in guiding consumers toward healthier choices79,80. Its widespread use has prompted many transnational food manufacturers to include Nutri-Score labels across products sold in the European Union.

Nutri-Score assigns foods a rating from A (dark green) for the most favorable options to E (dark orange) for the least. This score is calculated using a linear model based on a selection of seven nutritional and ingredient components per 100 g of product: energy, sugar, saturated fat, and sodium (negative factors, $N$, each awarded 0–10 points), and fiber, protein, and fruit/vegetable content (positive factors, $P$, each awarded 0–5 points). Each component contributes independently to the final score, and the formula is expressed as:

$$ \text{Nutri-Score} = N - P. $$

The lower the summation of the individual components, the better the Nutri-Score rating and label the food receives (Table 2).
The system uses separate scales for food, beverages, and fats/oils (Table 2), adjusting thresholds to account for differences in nutrient density.

Table 2: Nutri-Score rating system ranges for foods, oils, and drinks at each Nutri-Score label.81

Importantly, Nutri-Score does not account for the processing methods used to produce a food item and therefore does not distinguish between whole, minimally processed, and highly processed products. Instead, it relies solely on the three nutrients driving hyper-palatability (sugar, fats, and salt) as negative factors to help identify HPFs. Conversely, positive elements such as fruit, vegetable, fiber, and protein content can help identify MPFs.

A key limitation of Nutri-Score lies in the independent scoring of each component. For instance, 1 gram of salt per 100 g contributes five points across all products, regardless of their broader context or formulation. This means that reformulated highly processed products, designed to lower sugar, fat, or salt content, can receive favorable scores, potentially misleading consumers about their overall health profile. At the same time, PCIs such as olive oil, which often have higher fat content per 100 g but may offer health benefits when consumed in moderation, have been penalized by earlier versions of the algorithm. As a result, Nutri-Score has faced criticism for oversimplifying nutritional evaluation and failing to reflect the degree of processing.

In response to these concerns, an updated Nutri-Score algorithm was released in 2022, refining the scoring thresholds and introducing a dedicated scale for fats, oils, nuts, and seeds to more accurately evaluate many PCIs80. Nevertheless, some experts continue to advocate for integrating additional criteria such as ingredient quality, presence of additives, and extent of industrial processing, to develop a more holistic and health-relevant evaluation of food products.
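The additive, component-independent logic described above can be sketched in a few lines of Python. This is a minimal illustration only: the cutoff tables below are hypothetical placeholders, not the official per-component thresholds, which are published separately for foods, beverages, and fats/oils.

```python
def points(value, cutoffs):
    """Award one point for each cutoff the per-100 g value exceeds."""
    return sum(value > c for c in cutoffs)

# Hypothetical cutoffs for demonstration; the official algorithm
# defines detailed per-component tables and separate scales by
# product category.
NEGATIVE = {  # each component worth 0-10 points
    "energy_kj": [335, 670, 1005, 1340, 1675, 2010, 2345, 2680, 3015, 3350],
    "sugar_g": [4.5, 9, 13.5, 18, 22.5, 27, 31, 36, 40, 45],
    "sat_fat_g": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    "sodium_mg": [90, 180, 270, 360, 450, 540, 630, 720, 810, 900],
}
POSITIVE = {  # each component worth 0-5 points
    "fiber_g": [0.9, 1.9, 2.8, 3.7, 4.7],
    "protein_g": [1.6, 3.2, 4.8, 6.4, 8.0],
    "fruit_veg_pct": [40, 60, 80, 80, 80],
}

def nutri_score(food):
    """Final score N - P; lower is better (mapped to A-E label bands)."""
    n = sum(points(food.get(k, 0), c) for k, c in NEGATIVE.items())
    p = sum(points(food.get(k, 0), c) for k, c in POSITIVE.items())
    return n - p
```

Because every component is scored in isolation and summed, a reformulated product can lower a single negative component to cross into a better band, which is precisely the gaming opportunity the critique above describes.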
Despite these challenges, Nutri-Score remains one of the most widely used and scientifically validated labeling systems. Its key strengths lie in simplicity, transparency, and its proven ability to influence consumer behavior and encourage reformulation by food manufacturers. As discussions continue to refine its algorithm, Nutri-Score may evolve further to incorporate emerging evidence on food processing while continuing to support informed, health-conscious dietary choices.

# SIGA

All identified risks associated with food processing under NOVA fall into the NOVA 4 category, which encompasses a broad and diverse range of UPFs. Hence, classification systems such as SIGA seek to enhance the granularity of UPFs78. SIGA presents an advanced framework for evaluating the levels of food processing, connecting two essential viewpoints: the holistic perspective, which examines the structural integrity and interactions within food components, and the reductionist perspective, which emphasizes individual ingredients and technological changes. This model is an evolution of the NOVA classification, offering a more detailed differentiation among food categories by considering further criteria such as the impact of processing on the food matrix, and by rigorously identifying industrial markers of ultra-processing (MUP) and their documented health risks. MUPs are derived from technological processes related to cracking or synthesis, and they can serve as either ingredients or additives in food products.

SIGA takes NOVA classes and further divides them into two or more categories, converging to a total of seven classes (Table 3). NOVA 1 is separated into unprocessed foods and those that undergo minimal processing, distinguishing between raw milk and pasteurized milk, raw eggs and bleached eggs, and raw fruits and waxed fruits.
Importantly, SIGA assigns the PCIs of NOVA 2 based on their processing; for example, whole wheat flour is found in Class 1 while refined wheat flour is assigned to Class 2. NOVA 3 has been divided according to the concentration of salt, sugar, and fat within the food item, following the recommendations of the Food Standards Agency of the United Kingdom (FSA) and the World Health Organization (WHO). Foods containing less than 1.5 g/100g salt, 12.5 g/100g sugar, and 17.5 g/100g fat, or beverages with less than 0.75 g/100g salt, 6.25 g/100g sugar, and 8.75 g/100g fat, are considered nutritionally balanced (Class 3), while foods or beverages exceeding at least one of the thresholds are classified as imbalanced (Class 4). By including criteria such as fat, sugar, and salt content, SIGA can differentiate foods based on hyper-palatability, designating those that may encourage overeating as more processed within their SIGA group (Table 3, Column 2).

SIGA further distinguishes UPFs by the presence of MUPs within the ingredient list. A single MUP in a food item is sufficient for the food to be classified as a UPF. Similar to the breakdown of NOVA 3, SIGA further categorizes NOVA 4 into three classes through nutritional balance assessment, presence of MUPs, and presence of MUPs with known or uncertain health risks, according to the European Food Safety Authority (EFSA) and the French Agency for Food, Environmental and Occupational Health and Safety (ANSES). Ingredients like refined oils, starches, xanthan gum, and lecithin are MUPs with no known health risks, placing foods containing these ingredients in Class 5 or Class 6, depending on their nutritional balance. Foods with health-risk MUPs, such as hydrolyzed sugars, modified starches, and sodium nitrite, are categorized as Class 7.
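The nutritional-balance split and the "one marker suffices" MUP rule are explicit enough to express directly. A minimal sketch follows, using the FSA/WHO-derived thresholds quoted above; the marker list is a hypothetical excerpt, not SIGA's full inventory:

```python
# Thresholds per 100 g as quoted in the text (FSA/WHO recommendations).
BALANCE_THRESHOLDS = {
    "food":     {"salt_g": 1.5,  "sugar_g": 12.5, "fat_g": 17.5},
    "beverage": {"salt_g": 0.75, "sugar_g": 6.25, "fat_g": 8.75},
}

# Hypothetical excerpt of markers of ultra-processing with health risks.
HEALTH_RISK_MUPS = {"hydrolyzed sugars", "modified starch", "sodium nitrite"}

def is_balanced(nutrients, kind="food"):
    """True if no salt/sugar/fat threshold is exceeded (Class 3 vs Class 4)."""
    limits = BALANCE_THRESHOLDS[kind]
    return all(nutrients[k] < limit for k, limit in limits.items())

def nova3_subclass(nutrients, kind="food"):
    """SIGA's split of NOVA 3: balanced (3) vs imbalanced (4)."""
    return 3 if is_balanced(nutrients, kind) else 4

def contains_mup(ingredients, mups):
    """A single marker of ultra-processing makes the food a UPF."""
    return any(i in mups for i in ingredients)
```

Exceeding even one threshold flips the classification, so a beverage that is low in salt and fat but slightly over the sugar limit is still imbalanced.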
A large-scale analysis of over 24,000 packaged food products from French supermarkets using the SIGA system found that approximately 67% were classified as ultra-processed, with 54% exhibiting multiple markers of ultra-processing78. Among products containing more than five ingredients, 75% were categorized as ultra-processed, underscoring the strong correlation between complex formulations and industrial food production. These items often contained MUPs like refined oils, hydrolyzed sugars, modified starches, and synthetic flavorings, aligning with public health concerns regarding high UPF consumption, as defined by NOVA.

Table 3: The SIGA classification system compared to the NOVA classification system.

However, unlike NOVA, SIGA has not yet been widely adopted in epidemiological research. This reflects important differences in the underlying logic of the two systems. NOVA, developed by nutrition epidemiologists, was designed to classify model foods commonly used in food frequency questionnaires and 24-hour recalls — foods that often lack detailed ingredient-level data and primarily report nutrient content. SIGA, by contrast, was built for application to real-world branded food products and relies on ingredient lists and nutrition facts available on the food packaging to detect MUPs and evaluate nutritional balance. These contrasting approaches reveal a broader gap between epidemiological research methods and the complexities of the commercial food supply, which may limit the portability and interoperability of classification systems between domains.

# Food Processing Ontologies and Indexing Systems

The food processing classification systems described so far were primarily developed within nutritional epidemiology to evaluate health implications related to processed food consumption.
In contrast, food scientists have independently developed ontologies and indexing systems that capture a wider range of food attributes, encompassing not only processing but also composition, source, and preparation methods. An ontology is a structured vocabulary that defines standardized terms and their relationships, enabling consistent tagging, classification, and integration of information across datasets. In the food and nutrition domain, ontologies serve as hierarchical frameworks that organize complex food-related data and promote interoperability across platforms and studies. While no ontology was originally created to specifically classify food processing levels, several incorporate elements related to processing and can be adapted for relevant research. Notable examples include LanguaL, FoodEx2, and FoodOn. These systems help standardize food descriptions, link ingredients to preparation techniques, and provide structured foundations for AI-based food classification models. Despite their potential, the practical use of these ontologies and indexing systems in machine learning applications remains constrained by the limited availability of large, annotated datasets specifically designed to train models.

# LanguaL for Food Processing

The LanguaL (Langua Alimentaria) classification system82 is a hierarchical food indexing thesaurus designed to standardize food descriptions across databases and research applications. It provides a multi-faceted approach to describing foods based on their composition, origin, and processing methods. Such granularity is critical for updating national food composition tables as well as for supporting cross-country data harmonization and assessments of public health outcomes. LanguaL has been used for harmonization and linkage of food composition tables with consumption databases, for example by describing and classifying traditional Italian composite dishes83 — capturing multi-ingredient formulations and cooking methods.
Furthermore, LanguaL was used in standardizing descriptors for ready-to-eat products to support nutritional labeling, dietary exposure assessments, and regulatory applications84. LanguaL has also been used for the harmonization of nutrient databases across European countries, as its standardized descriptors facilitate the comparison and integration of food composition data essential for multinational nutritional surveillance85. The EPIC nutrient database project used LanguaL as part of its strategy to standardize nutrient databases across several European countries86, ensuring that epidemiological studies could rely on harmonized food composition data when assessing the impact of food processing on nutrient intake and subsequent health effects.

Among its various facets, several are particularly relevant to food processing classification as they describe cooking methods, preservation techniques, physical transformations, and industrial treatments. These facets enable a systematic categorization of food processing at distinct levels, making LanguaL a valuable tool for assessing food transformation across large datasets. The following LanguaL facets provide key descriptors of food processing and modification:

# 1. Facet F: Cooking Method

This facet categorizes food based on the cooking techniques applied, such as boiling, frying, steaming, roasting, baking, and microwaving. Cooking significantly alters food texture, nutrient composition, and digestibility, making it a key determinant of processing levels.

# 2. Facet G: Preservation Method

Foods are classified based on the preservation techniques used to extend shelf life, including freezing, canning, dehydration, irradiation, and vacuum-sealing. These methods help maintain food stability but may also impact nutritional value and food structure.

# 3. Facet H: Packing Medium

This facet describes the medium in which food is packed or stored, such as brine, oil, syrup, vinegar, or vacuum packaging.
The choice of medium influences food stability, sensory attributes, and potential chemical interactions.

# 4. Facet J: Treatment Applied

This facet captures industrial and mechanical treatments such as pasteurization, fermentation, hydrogenation, fortification, and enzymatic processing. These treatments often define whether a food remains minimally processed or transitions into a more refined or ultra-processed category.

# 5. Facet K: Physical State, Shape or Form

Foods are described based on their physical state, such as powdered, granulated, shredded, liquid, or whole form. The degree of structural modification influences processing classification and ingredient functionality in food formulations.

# 6. Facet M: Storage and Use Conditions

This facet indicates how a food product is stored or intended to be used, such as shelf-stable, refrigerated, frozen, or ready-to-eat. Storage requirements reflect processing intensity, as highly processed foods tend to have extended shelf lives due to stabilizers and preservatives.

Facets A, B, and Z primarily describe food sources and biological origins; however, some subclasses within these facets provide additional insights into food processing characteristics:

# 1. Facet A: Product Type

This facet categorizes foods based on their general classification, distinguishing between raw, semi-processed, and processed food items. Subclasses include refined foods such as milled grains, processed dairy, and extracted oils, which reflect varying levels of industrial transformation.

# 2. Facet B: Source of the Food Product

Although primarily focused on the biological source of foods, certain subclasses describe whether an ingredient has been modified or isolated from its original source (e.g., whole grain vs. refined grain, whole milk vs. powdered milk).

# 3. Facet Z: Processing Technologies and Industrial Applications

Some subclasses within this facet relate to specific processing technologies and industry-defined food categories, such as fermentation techniques, enzymatic treatments, and specialized food formulations. These descriptors help track industrial processing methods across food supply chains.

In summary, the structured vocabulary provided by LanguaL enables researchers, policymakers, and AI-driven food classification models to systematically assess food processing levels across large datasets. Its hierarchical classification allows for possible automated food processing classification in machine learning applications. By leveraging facets related to cooking, preservation, treatment, and physical transformation, LanguaL can offer a granular view of food processing that enhances the accuracy of nutritional and epidemiological assessments.

# FoodEx2

Developed by EFSA, FoodEx2⁸⁷ is a standardized food classification system designed for food safety monitoring, dietary exposure assessment, and regulatory reporting. It provides a hierarchical structure that categorizes foods based on their composition, processing level, and intended consumption. Key aspects include:

● Categorization of raw, minimally processed, and processed foods.
● Tracking of food additives, contaminants, and exposure risks.
● Linking food descriptions to consumption surveys and exposure models.

FoodEx2 underpins EFSA’s DietEx tool, which estimates chronic dietary exposure to chemicals by mapping consumption data from the Comprehensive European Food Consumption Database to FoodEx2 codes88. It also drives the ImproRisk model, enabling chronic exposure assessment to a wide range of food-borne hazards through FoodEx2-based categorization89.
Beyond chemical risk, FoodEx2 was employed to harmonize the Italian IV SCAI children’s survey data, standardizing descriptions of foods, beverages, and supplements consumed by children between 2017 and 2020 in compliance with EFSA’s EU Menu guidelines90.

# FoodOn Ontology

FoodOn91 is a comprehensive ontology that models food products, ingredients, and food transformation/processing techniques in a standardized structure. The ontology presents a hierarchical categorization of food products, ranging from raw agricultural commodities to processed foods. A key feature of FoodOn, especially relevant to food processing, is its “Food Transformation Process” class, which provides a structured vocabulary for describing how food products are altered throughout the production, processing, and distribution pipeline. This class is organized hierarchically, allowing food items to be classified according to their degree of processing, transformation method, and the technological interventions applied to them. FoodOn categorizes food transformations into distinct process classes, including:

# 1. Physical Transformations

● Mechanical processing: Grinding, milling, crushing, chopping, slicing.
● Phase transitions: Dehydration, freezing, thawing, sublimation (freeze-drying).
● Structural modifications: Homogenization, emulsification, extrusion.

# 2. Thermal Processing

● Heat-based transformations: Boiling, steaming, roasting, frying, baking, grilling.
● Cold processing: Refrigeration, freezing, deep-freezing.
● Pasteurization and sterilization: High-pressure processing (HPP), ultra-high-temperature (UHT) treatment.

# 3. Chemical and Biochemical Transformations

● Fermentation processes: Lactic acid fermentation (yogurt, kimchi), alcoholic fermentation (wine, beer).
● Acidification and alkalization: Pickling, pH modification.
● Enzymatic processes: Curing, proteolysis, hydrolysis.
● Food fortification: Addition of vitamins, minerals, amino acids.
● Addition of preservatives and additives: Emulsifiers, stabilizers, colorants, anti-caking agents.

# 4. Combination and Formulation Processes

● Mixing and blending: Ingredient incorporation (e.g., salad dressings, spice blends).
● Reconstitution: Powdered foods mixed with liquids (e.g., reconstituted fruit juice, instant soups).
● Binding and texturization: Use of hydrocolloids, gelatinization, protein structuring.

# 5. Packaging and Storage Processing

● Modified atmosphere packaging (MAP).
● Vacuum sealing and controlled-environment storage.
● Edible coatings for preservation.

Each transformation step within FoodOn is connected to both input and output food states, enabling traceability from raw ingredients to final processed products. This structured representation aids researchers in analyzing how food processing impacts nutritional quality, shelf life, and safety. FoodOn consolidates multiple food categorization systems, such as the USDA food composition databases92, LanguaL82, and the Ontology for Biomedical Investigations (OBI)93, facilitating interoperability among scientific, industrial, and regulatory datasets. Each integrated food classification ontology offers a complementary evaluation of food processing:

● LanguaL (for food descriptions and processing methods).
● FoodEx2 (for regulatory and exposure assessment applications).
● Open Food Facts (for real-world packaged food processing analysis).
● OBI and ENVO (Environment Ontology) (for linking food transformations with environmental factors).

By integrating these systems, FoodOn offers a unified ontology that links food composition, processing transformations, and health-related metadata.
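The input/output linkage of transformation steps can be sketched as a simple chain. This is an illustrative data structure only, not FoodOn's actual OWL representation; the process names and the wheat-to-bread chain are hypothetical examples:

```python
from dataclasses import dataclass

@dataclass
class Transformation:
    process: str        # e.g. "milling" or "baking"
    category: str       # one of the process classes listed above
    input_state: str    # food state consumed by this step
    output_state: str   # food state produced by this step

def trace(start, steps):
    """Follow a chain of transformations from a raw ingredient,
    checking that each step's input matches the previous output."""
    state, path = start, [start]
    for step in steps:
        if step.input_state != state:
            raise ValueError(f"broken chain at {step.process}")
        state = step.output_state
        path.append(state)
    return path

# Hypothetical wheat-to-bread transformation chain.
chain = [
    Transformation("milling", "physical", "wheat grain", "wheat flour"),
    Transformation("baking", "thermal", "wheat flour", "bread"),
]
```

Chaining states this way is what enables the traceability from raw commodity to final processed product: any step whose input does not match the preceding output signals a gap in the recorded production pipeline.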
# Emerging Trends in Food Processing Assessment: From Qualitative Systems to AI-Driven Methodologies

As mentioned previously, recent studies58,94 have shown that descriptive and subjective classification systems, such as NOVA, suffer from poor inter-rater reliability and reproducibility (see Scientific and Policy Critiques). These issues are further exacerbated by the inconsistent availability of data across food databases to match the descriptions. Meanwhile, more algorithmic systems like Nutri-Score and SIGA have attempted to introduce quantitative rigor into the assessment of product healthfulness by applying fixed thresholds to a few pre-selected nutrients. Yet, this approach brings its own challenges.

First, fixed thresholds applied uniformly across food categories can penalize naturally nutrient-dense foods, while rewarding reformulated HPFs that are engineered to meet specific criteria without improving overall quality. Second, the expert selection of only a few nutrients, such as sugar, fat, and sodium, provides an informative but ultimately reductionist picture of food composition and processing. Processing does not just affect isolated biomarkers but alters the concentrations of multiple nutrients in a coordinated manner, with combinations that correlate strongly with the degree of processing94. Focusing on a small set of indicators makes these systems easy to manipulate through targeted reformulation, while ignoring important changes in the remaining food matrix. Third, scoring each nutrient independently using fixed cutoffs or expert-defined weights assumes linear relationships and fails to capture interactions between components, again encouraging optimization of individual nutrients at the expense of holistic food quality. These limitations of one-size-fits-all nutrient scoring systems highlight the broader challenges faced by standard classification methods.
By relying on predefined thresholds and expert-driven criteria, such systems often fail to capture the full complexity of food processing and are prone to subjective biases. This complexity instead calls for machine learning (ML) approaches, which are well-suited to detect the combinatorial changes in nutrient composition introduced by food processing. Data-driven models can empirically uncover reproducible patterns, refine classification schemes, and mitigate the inter-rater variability and selection bias that limit expert-based systems — all while offering far greater scalability. In fields such as genomics, ML has already transformed the landscape by enabling the discovery of complex, non-linear associations across massive datasets. Nutrition science may now be approaching a similar inflection point.

However, applying AI models to nutrition faces substantial obstacles. First, the underlying data is often incomplete or inconsistently structured. Real-world branded food products, such as those found in grocery stores, are typically described using nutrition facts and ingredient lists66,95. These formats differ markedly from the “model foods” used in epidemiological research, which represent composite averages and often include a broader range of nutrient values but lack ingredient specificity and real-market granularity96. This disconnect creates a major portability gap between public health tools developed in controlled research settings and their application to actual consumer environments. Compounding this challenge is the limited transparency in commercial food manufacturing. Proprietary formulations and undisclosed processing methods often leave researchers with only the nutrient facts and ingredient list to work from, which provide limited insight into food production and allow for significant uncaptured variability97. Furthermore, AI models require large, well-annotated datasets with consistent class labels to perform effectively.
In food processing informatics, both the scale and clarity of available data are often lacking. AI systems tend to thrive in scientific domains where open-access databases with rich, standardized metadata support cross-validation, benchmarking, and transferability. In contrast, nutrition science is constrained by a shortage of food composition databases that are comprehensively labeled according to standardized processing schemes, limiting the design and application of AI models. Even when classification or ontology annotations are available, they may introduce additional ambiguity, such as assigning univocal labels to only a small portion of the food supply or classifying the same item in multiple overlapping categories. These inconsistencies complicate supervised learning tasks and increase the risk that AI models may end up replicating human biases rather than revealing mechanistic truths, inheriting the same subjectivity they aim to replace. Nonetheless, promising examples have begun to emerge. In the next section, we examine FoodProX94, a machine learning model designed to estimate the degree of food processing based on nutrient composition. Despite current data limitations, it demonstrates the feasibility of developing classification systems that are both reproducible and adaptive, paving the way for a more evidence-based, data-rich future in nutrition science. # FoodProX FoodProX94 is an ML-based classifier designed to predict the degree of food processing by evaluating the full nutrient composition of a product. Unlike systems such as Nutri-Score or SIGA, which rely on expert-defined thresholds for a small subset of nutrients, FoodProX learns classification boundaries directly from data. Specifically, it maps nutrient concentration patterns to NOVA categories, allowing the model to detect systemic changes introduced by processing.
This approach moves beyond hyper-palatability metrics and narrowly scoped nutrient-based scoring by grounding decision-making in the broader physiological and chemical analysis of nutrient distributions. Indeed, many staple foods originate from once-living organisms, whose nutrient concentrations are shaped by complex metabolic networks98 and constrained by physical and chemical properties99. Therefore, these concentrations tend to follow predictable, universal distributions across the food supply, well approximated by log-normals with a consistent logarithmic standard deviation at different levels of average concentrations (Figure 1a)100,101. As the foundational concept of hyper-palatability suggests, food processing systematically shifts these physiological ranges — either by removing or enriching certain nutrients or by introducing novel compounds — which can distort the biochemical balance that underpins human homeostasis101. Such coordinated deviations across multiple nutrients can be identified by ML models and linked to varying degrees of processing. For instance, chemical and physical transformations during food production, such as vitamin loss from milling or sodium addition for preservation, result in detectable compositional shifts. Importantly, these shifts need to be captured holistically. A clear example is the comparison between raw onions and fried, battered onion rings: in this case, approximately $75\%$ of nutrients change in concentration by more than $10\%$, with more than half of the nutrients experiencing tenfold changes (Figures 1b–c). In its original implementation, the FoodProX algorithm was developed as a multi-class random forest classifier trained on nutrient quantities (in grams per 100 grams of food) to estimate the probability that a food belongs to one of the NOVA categories. Its design reflected the data landscape available in 2019, particularly the limited availability of datasets annotated with NOVA labels.
At the time, the main resource with NOVA annotations was the USDA’s Food and Nutrient Database for Dietary Studies106 (FNDDS) 2009–2010, the first notable application of the NOVA classification to epidemiological studies in the U.S.102. FNDDS is well suited to ML applications: it provides a rich nutrient profile for thousands of model foods, reporting between 65 and 102 nutrients depending on the cycle, with no missing values. However, only 2,484 items $(34.25\%)$ in this dataset were assigned a unique NOVA class, while the remainder were either left unclassified or required decomposition into their constituent ingredients. These aspects contribute to the reproducibility and portability issues documented for NOVA classification. First, manual annotators working outside of FNDDS often rely on food composition databases that report model foods with varying levels of detail, primarily focusing on nutrient composition and lacking ingredient lists103. This limitation prevents ingredient decomposition and renders the assignment of NOVA labels highly subjective. Second, the ingredient fields in FNDDS were primarily designed to map model foods to other USDA datasets, such as the Standard Reference Database, for nutrient calculations. As a result, these ingredient descriptions lack the level of detail found in branded product ingredient lists and do not provide sufficient granularity to assess processing markers like food additives. Leveraging the extensive nutrient panel available in FNDDS, three versions of the FoodProX model were trained using progressively smaller sets of nutrients (99, 62, and 12 variables) to simulate the transition from high-resolution food composition data for epidemiological studies to the more limited FDA-mandated nutrition facts found on branded food products95. Each model was trained using five-fold stratified cross-validation to ensure robust performance estimates across class distributions.
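This training setup (a multi-class random forest evaluated with five-fold stratified cross-validation) is straightforward to reproduce in outline. The following is a minimal sketch on synthetic data, not the original FoodProX code or the FNDDS nutrient panel; the data shapes and injected signal are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for a nutrient table: 400 foods x 12 nutrients,
# with 4 NOVA-like classes (the real models used 99, 62, or 12 nutrients).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))
y = rng.integers(0, 4, size=400)
# Inject a weak class signal into the first nutrient column so the
# classifier has something to learn.
X[:, 0] += y

clf = RandomForestClassifier(n_estimators=100, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)  # one accuracy value per fold
```

Stratified folds preserve the class proportions in every split, which matters here because the NOVA classes are imbalanced.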
Despite the reduction in input features, all models demonstrated commendable predictive ability and stability. The area under the receiver operating characteristic curve (AUC) remained consistently high across the nutrient panels. Even the 12-nutrient model exhibited $0.9804 \pm 0.0012$ for NOVA 1, $0.9632 \pm 0.0024$ for NOVA 2, $0.9696 \pm 0.0018$ for NOVA 3, and $0.9789 \pm 0.0015$ for NOVA 4. Additionally, the area under the precision-recall curve (AUP), which is particularly important for imbalanced classification tasks, also showed strong performance in the 12-nutrient model: $0.8882 \pm 0.0048$ for NOVA 1, $0.7468 \pm 0.0085$ for NOVA 2, $0.8723 \pm 0.0060$ for NOVA 3, and $0.9913 \pm 0.0006$ for NOVA 4. All metrics significantly exceed random baseline performance, underscoring the strong predictive signal embedded in nutrient composition. This is particularly notable given that NOVA’s manual classification criteria do not formally incorporate nutrient content. The result highlights the latent structure within nutrient profiles that corresponds to the degree of processing, even without explicit expert labeling based on these features. The strong performance of nutrient-based models is particularly encouraging given the current challenges in data standardization within nutrition science and the disconnect between epidemiological studies and real-world grocery environments. Most classification systems are designed for epidemiological research, yet their true public health impact depends on applicability at the consumer level, especially in grocery stores where people make daily food choices. Although both nutrition facts and ingredient lists contribute to the characterization of branded food products, the global inconsistency and poor regulation of ingredient data present significant barriers66.
For instance, a GS1 UK data audit found an average of $80\%$ inconsistency in product information across brands104. As a result, nutrition facts offer the most practical means of bridging the gap between curated model foods and real-world branded products, due to their consistent formatting, broad availability, and high reproducibility. In the European Union, for example, mandatory labeling includes total fat, saturated fat, carbohydrates, sugars, protein, salt, and caloric value, with member states often requiring additional fields105. These data are typically centralized into publicly accessible national databases, making them valuable assets for developing scalable food processing classification systems. The FAO’s International Network of Food Data Systems (INFOODS) provides a global directory of such resources, supporting data interoperability across countries106. FoodProX assigns a set of probability scores $\{p_i\}$ to each NOVA class, with the final classification determined by the class with the highest probability (Figure 1d–e). These scores represent a point on the probability simplex, a set of four non-negative values that sum to one. Although this simplex exists in a four-dimensional space, the normalization constraint reduces it to a three-dimensional object geometrically equivalent to a tetrahedron. This structure enables diagnostic insights into model behavior. As illustrated in Figure 2a, manually labeled foods cluster distinctly near the corners of the simplex, indicating high classification confidence and justifying FoodProX’s strong predictive performance. Beyond the cross-validation dataset, FoodProX was used to classify the remaining FNDDS entries that lacked NOVA labels or needed further ingredient decomposition. This extended analysis predicts that $73.35\%$ of the U.S. food supply, as represented in FNDDS, is ultra-processed — a figure confirmed by other estimates63,107.
However, the core insight lies not in this percentage, but in the nature of the classifier’s confidence. As shown in Figure 2b, these newly classified foods occupy the interior of the probability space rather than its corners, indicating more diffuse probability distributions and lower classification certainty. This pattern reflects the inherent ambiguity found in complex or composite foods, such as mixed dishes, where nutrient profiles do not align clearly with any single NOVA class. Rather than a limitation, this ambiguity is a key strength of FoodProX. It mirrors the challenges faced by manual assessors and highlights the limitations of hard classification in a domain where many foods exist along a continuum of processing. The probabilistic output of FoodProX thus provides a more nuanced, information-rich alternative to binary or categorical labeling, better capturing the diverse nutritional and processing characteristics within the food supply. Figure 1: Large-scale analysis of nutrient concentrations in food. (a) The concentration probability distribution for four nutrients across the 4,889 foods reported in NHANES 2009–2010 data, shown on a logarithmic horizontal axis. The four distributions are approximately symmetric on a log scale and have similar width and shape that are independent of the average concentration of the respective nutrient. Each symbol represents a histogram bin. (b,c) The common scale of nutrient fluctuations observed in the log space allows us to rescale all nutrients and compare them on a single plot, suggesting a methodology to detect foods with outlier concentrations. The pattern of nutrient outliers in different foods (quantified by a z-score in the log space) is informative of the type and extent of processing, as shown here for (b) 100 g of raw onion compared with (c) 100 g of onion rings.
(d,e) FoodProX is a random forest classifier that was trained over the nutrient concentrations within 100 g of each food, tasking the classifier to predict its processing level according to NOVA. FoodProX represents each food by a vector of probabilities $\{p_i\}$, capturing the likelihood of the food being classified as an unprocessed food (NOVA 1), a processed culinary ingredient (NOVA 2), a processed food (NOVA 3), or an ultra-processed food (NOVA 4). The final classification label, highlighted with a box on the right, is determined by the highest probability. The probability values were rounded to two decimal places. Abbreviation: NHANES, National Health and Nutrition Examination Survey. [Reproduced with permission from Reference 101.] The limitations of discrete categorization in food classification motivated the development of FPro, a continuous scoring system that quantifies the degree of food processing along a gradient. While FPro builds upon the NOVA framework, especially in the absence of detailed compound-level data related to structural changes like cellular wall breakdown or specific industrial techniques, it offers a more nuanced alternative. By leveraging the full nutrient profile of a food item, FPro provides a ranking that is less prone to errors than rigid, binary classifications and supports more consistent comparisons across food items otherwise considered identically processed. Formally, FPro is defined as the orthogonal projection of a food’s class probability vector $\{p_i^k\}$ onto the line within the probability simplex that extends from the minimally processed vertex (1,0,0,0) to the ultra-processed vertex (0,0,0,1). The score for item $k$ is given by:
$$ \mathrm{FPro}_k = \frac{1 - p_1^k + p_4^k}{2}. $$
This formula captures the trade-off between the FoodProX model’s confidence in classifying food item $k$ as NOVA 1 ($p_1^k$) versus NOVA 4 ($p_4^k$), the two endpoints of the processing spectrum (Figure 2b). As an example, the score is progressively higher for onion products as they undergo increasing levels of processing, ranging from approximately zero for raw ingredients (FPro = 0.0203 for raw onion) to one for UPFs (FPro = 0.9955 for onion rings) (Figure 2c). FPro captures nuanced gradations in food processing levels by evaluating the nutrient composition of a product in its entirety, rather than assessing individual nutrients in isolation. Unlike traditional scoring systems that treat each nutrient independently, FPro is inherently non-linear: the impact of a single nutrient on the score depends on its interaction with all other nutrients in the food. This means that the same change in a nutrient’s concentration can result in different shifts in FPro depending on the broader nutrient context. By learning from patterns of correlated nutrient variations within a fixed mass (100 grams), FPro estimates the likelihood that a food’s overall nutrient profile resembles that of unprocessed or ultra-processed foods. For instance, although fortified products may contain similar levels of vitamins and minerals as whole foods, the algorithm can detect atypical concentration patterns, signatures indicative of industrial formulation, which contribute to a higher FPro score. FPro’s ability to translate complex nutritional profiles into a smooth numerical scale makes it ideally suited for recommendation systems that can guide targeted dietary interventions. For consumers, this ranking facilitates informed choices by highlighting less processed alternatives within familiar product categories such as cereals or cookies (Figure 2d).
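Because the projection target is the simplex edge between the vertices (1,0,0,0) and (0,0,0,1), the FPro score reduces to simple arithmetic on the four class probabilities. A minimal sketch of the formula defined above:

```python
def fpro(p):
    """FPro score from a FoodProX probability vector (p1, p2, p3, p4).

    Orthogonal projection of the point onto the simplex edge running
    from the minimally processed vertex (1, 0, 0, 0) to the
    ultra-processed vertex (0, 0, 0, 1): FPro = (1 - p1 + p4) / 2.
    """
    p1, p2, p3, p4 = p
    assert abs(sum(p) - 1.0) < 1e-9, "probabilities must sum to 1"
    return (1 - p1 + p4) / 2

# The two endpoints of the processing spectrum map to the extremes.
assert fpro((1.0, 0.0, 0.0, 0.0)) == 0.0  # confidently minimally processed
assert fpro((0.0, 0.0, 0.0, 1.0)) == 1.0  # confidently ultra-processed
# A maximally ambiguous food sits at the midpoint of the scale.
assert fpro((0.25, 0.25, 0.25, 0.25)) == 0.5
```

Note that only $p_1$ and $p_4$ enter the score directly; the middle classes influence it only through the normalization constraint.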
Recently, FPro has been systematically applied to large datasets from major U.S. grocery stores, demonstrating its scalability and practical utility in real-world settings66. Moreover, the continuous nature of the score has proven especially effective for revealing broader trends, such as the correlation between processing level and price per calorie, a relationship that varies significantly across food categories. Figure 2: (a) Visualization of the decision space of FoodProX via principal component analysis of the probabilities $\{p_i\}$. The manual 4-level NOVA classification assigns unique labels to only $34.25\%$ of the foods listed in FNDDS 2009–2010 (empty circles). The classification of the remaining foods remains unknown or must be further decomposed into ingredients. The list of foods manually classified by NOVA is largely limited to the three corners of the phase space, foods to which the classifier assigns dominating probabilities. (b) FoodProX assigned NOVA labels to all foods in FNDDS 2009–2010. The symbols at the boundary regions indicate that for these foods the algorithm’s confidence in the classification is not high, hence a 4-class classification does not capture the degree of processing characterizing that food. For each food $k$, the processing score FPro$_k$ represents the orthogonal projection (black dashed lines) of $\vec{p}^{\,k} = (p_1^k, p_2^k, p_3^k, p_4^k)$ onto the line $p_1 + p_4 = 1$ (highlighted in dark red). (c) We ranked all foods in FNDDS 2009–2010 according to FPro. The measure sorts onion products in increasing order of processing, from “Onion, Raw” to “Onion rings, from frozen”. (d) Distribution of FPro for a selection of the 155 Food Categories in What We Eat in America (WWEIA) 2015–2016 with at least 20 items. WWEIA categories group together foods and beverages with similar usage and nutrient content in the US food supply.
Sample sizes vary from a minimum of 21 data points for “Citrus fruits” to a maximum of 340 data points for “Fish”. For each box in the box plots, the minimum indicates the lower quartile, the central line represents the median, and the maximum corresponds to the upper quartile. The upper and lower whiskers represent data outside of the inter-quartile range. All categories are ranked in increasing order of median FPro, indicating that within each food group, we have remarkable variability in FPro, confirming the presence of different degrees of processing. We illustrate this through four ready-to-eat cereals, all manually classified as NOVA 4, yet with rather different FPro. While the differences in the nutrient content of Post Shredded Wheat’n Bran (FPro = 0.5658) and Post Shredded Wheat (FPro = 0.5685) are minimal, with lower fiber content for the latter, the fortification with vitamins and minerals and the addition of sugar significantly increase the processing of Post Grape-Nuts (FPro = 0.9603), and the further addition of fats results in an even higher processing score for Post Honey Bunches of Oats with Almonds (FPro = 0.9999), showing how FPro ranks the progressive changes in nutrient content. [Reproduced with permission from Reference 94.] Unlike expert-driven classification systems, FPro is a quantitative algorithm that leverages standardized inputs to generate reproducible, continuous scores, avoiding the use of arbitrary thresholds and maximizing discriminatory power across foods. This design supports sensitivity analyses and uncertainty quantification, both of which are typically absent in traditional frameworks. FoodProX, therefore, marks a significant step toward improving the objectivity and reproducibility of food classification through ML.
Looking ahead, as the field’s understanding of food processing deepens, the availability of increasingly large, heterogeneous, and unstructured datasets will demand more advanced modeling approaches. This shift is already paving the way for the adoption of deep learning architectures, including Large Language Models (LLMs), which are now expanding beyond natural language to capture complex biological and nutritional data. # Towards a Deeper Understanding of Food Processing LLMs have emerged as powerful tools for extracting meaningful representations from textual data, making them increasingly integral for generating features in various prediction tasks. By transforming product descriptions, ingredient lists, and other textual attributes into context-rich embeddings, LLMs can capture linguistic nuances that simpler text processing methods may overlook or that tabular data may fail to capture. When integrated with ML algorithms, LLM-based features can facilitate scalable assessments of food processing levels and enhance classification reliability for large datasets. Real-world datasets containing food and nutrition-related data are notorious for incomplete entries, from incomplete ingredient lists to partial or missing nutrient values. LLM-based approaches are a way to deal with such missingness. Because an LLM processes input as a sequence, one can simply omit an unavailable field or include a placeholder (e.g., “unknown”) without breaking the model’s input format108. The model’s contextual embedding will naturally reflect the absence of that information and down-weight its importance, focusing on the available data. Recent research on heterogeneous data imputation leverages this property by inserting mask tokens for missing entries and letting the language model’s context understanding fill in or ignore gaps.
In essence, the contextual representation produced by an LLM can capture what is known about a product (e.g., certain ingredients, a few nutrient values) while gracefully handling unknown portions. This starkly contrasts with traditional pipelines that might require explicit imputation, such as filling in missing nutrient values with averages or minima, which may not only introduce bias and errors but also require expertise and additional work. By supporting variable-length and flexible inputs, LLM-based classifiers maintain performance even as data quality varies. In practice, this means we no longer need to discard or heavily preprocess records with missing values; the model utilizes whatever information is present, making it highly practical for real-world, heterogeneous databases. This is particularly useful in the domain of food processing classification, where both descriptive (ingredient lists, additives, preparation techniques) and quantitative data (e.g., nutrient values) contribute to determining the level of processing. For example, FoodProX relies on a fixed panel of nutrients to infer processing scores; however, these models may underperform when significant nutrient data is missing and do not currently leverage the noisy but informative textual data captured by branded food product ingredient lists. In contrast, LLM-based models can integrate any available information, whether it is a complete ingredient list, partial nutrient panel, or descriptive metadata, to assess processing level with greater flexibility and less disambiguation effort. As such, LLMs provide a robust framework for predicting NOVA classes or estimating food processing scores even under real-world data constraints, making them particularly valuable for scalable, automated food classification efforts. In the following section, we present a case study that sets up, trains, and compares machine learning pipelines based on FoodProX and LLM architectures.
The primary goal of this work is to evaluate the emergent capabilities of pretrained LLMs when applied to nutritional data, rather than developing them from scratch. # Case Study: Applying Machine Learning and Large Language Models for Food Processing Classification Since the early development of FoodProX, more datasets with metadata on food processing have become available. In this case study, we will utilize data from the Open Food Facts platform to train machine learning models and LLMs for predicting NOVA classes by combining structured and unstructured data. Open Food Facts is an open, crowd-sourced database of branded food products from around the world, based in France, mainly created through the efforts of thousands of volunteers systematically scanning foods in grocery stores109. The database includes a diverse set of attributes such as product names, ingredient lists, nutritional composition, food processing classifications (including NOVA groups, Nutri-Score), brand and packaging details, environmental impact indicators (e.g., carbon footprint), and food safety information (e.g., allergens and additives). The database is maintained by contributors who upload product details, making it a valuable resource for large-scale food analysis. While the quality control cannot compare to curated epidemiological datasets, Open Food Facts is increasingly used in research related to food processing, nutrition, and health due to its broad coverage and organized structure110. Given its open-access nature, Open Food Facts provides an excellent foundation for machine learning and deep learning applications that aim to assess food processing characteristics, making it an ideal dataset for developing AI-driven models to classify foods by processing levels.
The metadata in the Open Food Facts dataset is heterogeneous, comprising both unstructured textual data (such as product descriptions and ingredient lists) and structured numerical data (including nutrient values commonly referred to as nutrition facts). We filtered the Open Food Facts dataset to include only products with English names and complete information for the key fields required in our analysis — product name, ingredient list, NOVA classification, and the 11 nutrients used in the FoodProX models — resulting in a final dataset of 149,960 products. To enhance interpretability and offer practical heuristics, we also incorporated simple engineered features extracted from this information (such as the total number of ingredients and the total number of additives). These variables are hypothesized to correlate with the likelihood that a product is classified as ultra-processed, offering simple, interpretable benchmarks to complement more complex predictive models. Figure 3 illustrates a representative example of the rich and diverse metadata available for a single food product in Open Food Facts. The figure is organized into distinct sections, each showcasing a different type of data that contributes to model development and analysis: Engineered features, such as the number of ingredients and the number of additives, provide a simple and interpretable metric for assessing food processing. These features act as rule-of-thumb indicators111, where a higher count suggests that a food product is ultraprocessed. Tabular numerical data, which includes detailed nutrient values (e.g., fats, carbohydrates, proteins, etc.), offers a precise and structured representation of a product’s nutritional composition. This data is well-suited for traditional machine learning models, enabling rigorous analysis and comparison based on standardized measurements, although limited by missing values. 
Unstructured textual data, including food descriptions and ingredient lists, is processed using natural language processing techniques. By converting this information into semantic embeddings, LLM models capture the nuances and context of each food product, even when some information might be missing. Together, these diverse data types offer complementary insights into the composition and quality of food products, enabling the development of a robust array of AI models for classifying levels of food processing that vary in complexity and interpretability. Product name: Crispy Golden Onion Rings - Roundy's - 88 g. Ingredient list: Diced onions, enriched wheat flour (wheat flour, niacin, ferrous sulfate, thiamine mononitrate, riboflavin, folic acid), vegetable oil (soybean and/or canola), corn starch, wheat flour, water, modified corn starch, contains $2\%$ or less of calcium chloride, caramel color, cellulose gum, leavening (sodium aluminum phosphate, sodium bicarbonate), oleoresin paprika (color), salt, sodium alginate, spice, sugar, whey, yeast, yellow corn flour. Figure 3: Example instance from the Open Food Facts dataset used in the training of the predictive models. The product name, ingredient list, and full nutrient panel (not fully shown here) are used to construct the input sentences for the LLM-based models. The nutrient panel shown includes the 11 nutrients used to train the FoodProX-based models. The last two quantitative indicators are used in the explanatory models. The number of additives is also included as an additional feature in one variant of the FoodProX model. Figure 4: Case Study Schematic. A diagram illustrating models, input data, and architecture types used in this study. All models are assessed by their ability to predict the NOVA class.
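The two engineered features (ingredient count and additive count) can be derived directly from the raw label text. A minimal sketch; the E-number pattern matching is a simplifying assumption for illustration, not the pipeline used in the study (Open Food Facts itself ships a precomputed additive count):

```python
import re

def count_ingredients(ingredient_list: str) -> int:
    """Count top-level ingredients, ignoring commas inside parentheses."""
    depth, commas = 0, 0
    for token in re.split(r"(\(|\)|,)", ingredient_list):
        if token == "(":
            depth += 1
        elif token == ")":
            depth -= 1
        elif token == "," and depth == 0:
            commas += 1
    return commas + 1  # k top-level commas separate k + 1 ingredients

def count_additives(ingredient_list: str) -> int:
    """Crude heuristic: count E-number codes (e.g. E330) in the text."""
    return len(re.findall(r"\bE\d{3}[a-d]?\b", ingredient_list, re.IGNORECASE))

label = "sugar, cocoa butter (cocoa, emulsifier E322), salt, flavouring E330"
n_ingredients = count_ingredients(label)  # 4 top-level ingredients
n_additives = count_additives(label)      # 2 E-numbers found
```

In practice, many labels name additives by function rather than E-number (e.g. "caramel color"), so a real pipeline would need a curated additive vocabulary rather than this regular expression alone.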
# Explanatory Models Before examining more advanced classification approaches, we first established a baseline using two explanatory models that predict NOVA from simple yet informative features: the number of ingredients and the number of additives. In the ingredient-based model, each food is classified by how many individual components appear on its label, while in the additive-based model, classification is guided by the presence and quantity of listed additives. Both classifiers are Random Forests, trained on predefined data splits for cross-validation that remain consistent across all models in this study (explanatory, FoodProX, and LLM-based). Although these features alone cannot capture the full nuance of food processing, they offer insight into how much transformation or industrial intervention a product may have undergone. By focusing on these straightforward descriptors, we obtain a useful starting point for comparing and contextualizing the more complex FoodProX and LLM-based models introduced later in the chapter. Both explanatory models use five-fold cross-validation on the predefined splits, with hyperparameter tuning performed by grid search. # FoodProX Models The second part of this case study involves applying the FoodProX algorithm to the Open Food Facts dataset, using two predictive models to classify foods according to the NOVA classification system. The first model uses 11 key nutrients as input features to predict the NOVA class of a food product, providing a rich representation of each product’s nutritional composition: proteins, fat, carbohydrates, sugars, fiber, calcium, iron, sodium, cholesterol, saturated fat, and trans fat. The second model extends the first by including an additional input variable: the total number of additives in a food product.
Alongside the original 11 nutrient features, this additive count aims to enhance the model’s ability to estimate NOVA classifications. This inclusion aligns with criteria often used by manual assessors, who explicitly consider additives when identifying ultra-processed foods. Like the explanatory models, both FoodProX models use a Random Forest classifier, trained on the predefined splits with five-fold cross-validation and a grid search for hyperparameter tuning. # Leveraging Large Language Models In the final part of the case study, we train a prediction model using contextual vector representations of food products. To do this, we leverage pre-trained transformer-based models such as BERT (Bidirectional Encoder Representations from Transformers)112 and its domain-specific variant, BioBERT113. These models have demonstrated strong performance across various biomedical and clinical tasks, including relation extraction from clinical texts114, detection of adverse drug reactions from biomedical and social media sources115, and identification of relationships between biomedical entities in scientific literature116. BioBERT is a domain-specific adaptation of BERT, pre-trained on large-scale biomedical corpora such as PubMed abstracts and PMC full-text articles. It is particularly well-suited for tasks involving technical or health-related terminology. In the context of food classification, BioBERT is expected to better capture the semantic meaning of ingredients and additives such as “lecithin,” “ascorbic acid,” or “monosodium glutamate,” which appear more frequently in biomedical literature than in general-language corpora. By comparing models using BERT and BioBERT embeddings, we aim to assess whether this domain adaptation improves classification performance for food processing prediction. These models provide context-aware embeddings that capture semantic relationships between words.
By leveraging text embeddings as structured features, researchers have successfully used LLMs for food recommendation systems, ingredient substitution models [117], and dietary assessment automation [118]. We employ BERT and BioBERT to generate embeddings from Open Food Facts data, integrating food descriptions, ingredient lists, and nutrient values to classify foods according to NOVA classes. The process involves transforming product metadata into structured text that can be analyzed by the model. Specifically, we first create sentences from the available information for each food: “[FOOD NAME] has the ingredients: [INGREDIENT LIST], and the nutrients: [X UNIT OF NUTRIENT N1], [Y UNIT OF NUTRIENT N2], …” For example: “Chocolate chip cookies have the ingredients: wheat flour, sugar, cocoa butter, chocolate liquor, and the nutrients: 50g of carbohydrates, 5g of protein, 10g of fat, 2g of fiber.”

Figure 5: a) Three-dimensional UMAP projection of BERT embeddings colored by NOVA classification. Each point represents a food item embedded using BERT and reduced to three dimensions using Uniform Manifold Approximation and Projection (UMAP). Points are colored according to their NOVA group. This visualization illustrates the clustering of food items based on linguistic features in their names, showing the separation across NOVA categories. b) Three-dimensional UMAP projection of BERT embeddings for onion-based products, colored by NOVA classification. The sequence spans from raw foods (e.g., Onion and Yellow Onions, NOVA 1) through intermediate forms (e.g., Grilled Cipolline Onions, French Onion Dip, and Crispy Onions, NOVA 3) to ultra-processed items (e.g., Breaded Onion Rings, NOVA 4), illustrating how embedding positions shift across NOVA groups.

By structuring the input in this manner, we enable the model to interpret both categorical and numerical data within a unified framework.
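The template can be generated programmatically; a minimal sketch (the helper name and input layout are illustrative, not the actual pipeline code) serializes one product:

```python
def build_food_sentence(name, ingredients, nutrients):
    """Serialize product metadata into the template sentence
    '[FOOD NAME] has the ingredients: ..., and the nutrients: ...'."""
    ingredient_part = ", ".join(ingredients)
    nutrient_part = ", ".join(f"{amount} of {nutrient}" for nutrient, amount in nutrients)
    return f"{name} has the ingredients: {ingredient_part}, and the nutrients: {nutrient_part}."

sentence = build_food_sentence(
    "Chocolate chip cookies",
    ["wheat flour", "sugar", "cocoa butter", "chocolate liquor"],
    [("carbohydrates", "50g"), ("protein", "5g"), ("fat", "10g"), ("fiber", "2g")],
)
```

Fields that are missing for a given product can simply be omitted from the sentence, which is one reason this text representation tolerates incomplete records.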
Each sentence is then tokenized and passed through BERT/BioBERT. Rather than applying mean pooling across all token embeddings, which can dilute sentence-level meaning when sequences are long or contain uninformative tokens (e.g., units, numbers), we extract the embedding corresponding to the [CLS] token. This 768-dimensional vector is specifically trained during BERT’s pretraining to represent the entire input sequence, making it well-suited for downstream classification tasks where a holistic understanding of the input is required. To explore how the semantic structure of the created sentences reflects processing levels, we projected the generated BERT embeddings into three dimensions using Uniform Manifold Approximation and Projection (UMAP). Figure 5a shows the global embedding landscape of the Open Food Facts database, with each point representing a food item colored by its NOVA classification. While the categories show partial overlaps, distinct regions emerge, particularly between unprocessed foods (NOVA 1) and ultra-processed foods (NOVA 4), indicating that the food descriptions, ingredient lists, and nutrient profiles carry implicit information about processing level. Figure 5b zooms in on a subset of onion-based foods to illustrate a semantic trajectory from minimally processed (Onion, Yellow Onions) to ultra-processed products (Breaded Onion Rings, Kroger onion rings breaded minced onion). As processing increases, the placement of the embeddings shifts in the embedding space, suggesting that BERT captures subtle linguistic and conceptual transformations linked to processing intensity. Together, these visualizations demonstrate the potential of language models to encode meaningful processing-related patterns in food descriptions. The [CLS] token embeddings extracted from each input sequence are then used as feature vectors to train a classification model that predicts NOVA classes.
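In the real pipeline the hidden states come from a pre-trained BERT or BioBERT model (e.g., via a library such as Hugging Face transformers); the sketch below substitutes a random NumPy array for the final-layer output purely to contrast the two pooling choices, [CLS] extraction versus masked mean pooling:

```python
import numpy as np

def cls_embedding(hidden_states):
    # BERT prepends the [CLS] token, so its vector sits at position 0
    # of the (seq_len, 768) final-layer output.
    return hidden_states[0]

def mean_pooled_embedding(hidden_states, attention_mask):
    # Alternative pooling: average over real (non-padding) tokens only.
    mask = attention_mask[:, None].astype(float)
    return (hidden_states * mask).sum(axis=0) / mask.sum()

rng = np.random.default_rng(0)
hidden = rng.normal(size=(6, 768))    # mock final-layer output for a 6-token input
mask = np.array([1, 1, 1, 1, 0, 0])   # last two positions are padding

cls_vec = cls_embedding(hidden)       # the 768-dim feature vector used downstream
mean_vec = mean_pooled_embedding(hidden, mask)
```

Either function returns one 768-dimensional vector per sentence; the chapter's pipeline keeps `cls_vec` as the feature vector for the downstream classifiers.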
At this stage, one common approach is to fine-tune the BERT model, starting from its pretrained weights and continuing training to optimize the embeddings for the selected classification task. However, in our case, we did not fine-tune either BERT or BioBERT. Instead, we used the [CLS] embeddings as fixed input features for separate downstream classifiers. This decision was based on the strong classification performance already achieved and the fact that fine-tuning introduces substantially higher computational costs. We trained three different classification architectures: two tree-based models (Random Forest and XGBoost) and one neural network. Each classifier was trained twice, once using the [CLS] embeddings from BERT and once using those from BioBERT. All LLM-based models were trained using the predefined data splits, with randomized hyperparameter tuning and five-fold cross-validation, in line with the training protocols adopted for both the FoodProX and explanatory models.

# Comparing the Models

All models follow the same two-stage validation protocol. First, a stratified 20% of the dataset is held out for hyperparameter tuning. The remaining 80% is partitioned into five fixed stratified train/test folds for cross-validation. For the FoodProX and explanatory models, we perform a grid search over the hyperparameter space. In contrast, for the BERT and BioBERT models, we apply a randomized hyperparameter search to reduce computation time. Once optimal hyperparameters are selected using the tuning set, we train five independent models, one per fold, using these settings and report the average performance across the five held-out test sets. This standardized procedure ensures that all models are evaluated under identical data splits, allowing for fair and reproducible comparisons of classification performance (in terms of AUC, AUP, and related metrics). The AUC and AUP values vary substantially across models and NOVA classes (Figure 6).
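The two-stage protocol can be sketched with scikit-learn on synthetic data; the features, labels, and fixed hyperparameters below are placeholders (the real pipeline selects them by grid or randomized search on the 20% holdout), and only the NOVA 1 one-vs-rest metrics are shown:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold, train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(600, 11))    # stand-in for the 11-nutrient features
y = rng.integers(1, 5, size=600)  # stand-in NOVA labels (1-4)

# Stage 1: hold out a stratified 20% for hyperparameter tuning.
X_cv, X_tune, y_cv, y_tune = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
params = {"n_estimators": 50, "random_state": 0}  # would come from the search

# Stage 2: five fixed stratified folds on the remaining 80%;
# one-vs-rest AUC/AUP computed per class, here for NOVA 1 only.
aucs, aups = [], []
for train_idx, test_idx in StratifiedKFold(5, shuffle=True, random_state=0).split(X_cv, y_cv):
    clf = RandomForestClassifier(**params).fit(X_cv[train_idx], y_cv[train_idx])
    p_nova1 = clf.predict_proba(X_cv[test_idx])[:, 0]  # classes_ is sorted, so column 0 is NOVA 1
    is_nova1 = (y_cv[test_idx] == 1).astype(int)
    aucs.append(roc_auc_score(is_nova1, p_nova1))
    aups.append(average_precision_score(is_nova1, p_nova1))

mean_auc, mean_aup = float(np.mean(aucs)), float(np.mean(aups))
```

Because the folds are fixed (via the shared random seeds and stratification), every model family is scored on identical train/test partitions, which is what makes the cross-model comparison fair.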
Explanatory models based solely on ingredient and additive counts consistently underperform relative to other approaches, particularly for NOVA 2 and NOVA 3, where their predictive power is markedly limited. In contrast, both FoodProX models demonstrate strong discrimination (AUC) and precision–recall performance (AUP), confirming the high predictive value of nutrient composition when assessed holistically. The model using only 11 nutrients already achieves high separability across classes (AUCs: 0.988, 0.983, 0.926, 0.948; AUPs: 0.941, 0.815, 0.802, 0.974). Augmenting this with the additive count further improves performance, reaching AUCs of 0.993, 0.988, 0.966, and 0.980 and AUPs of 0.956, 0.860, 0.888, and 0.991. This agrees with the preliminary observations on Open Food Facts reported in [94].

Figure 6: Comparative AUC and AUP scores for each classification model across the four NOVA classes (1–4), illustrating model discrimination (AUC) and precision–recall trade-off (AUP) per class. a) The ROC (Receiver Operating Characteristic) curve plots the true positive rate (sensitivity) against the false positive rate (1 – specificity) at varying classification thresholds. AUC (Area Under the ROC Curve) quantifies overall model discrimination: an AUC of 0.5 indicates random performance, while 1.0 denotes perfect class separation. Higher AUC values mean the model better distinguishes between positive and negative instances, regardless of threshold choice. b) The Precision–Recall curve plots precision (positive predictive value) versus recall (sensitivity) over different thresholds, and is particularly informative on imbalanced datasets. The Area Under the Precision–Recall Curve (AUP) summarizes the balance between capturing true positives (recall) and limiting false positives (precision).
A high AUP indicates the model maintains both high precision and high recall, which is especially important when class prevalence varies widely (as with NOVA classes). BERT- and BioBERT-based classifiers achieve among the highest AUCs overall (up to 0.995 for NOVA 1 and 0.993 for NOVA 2), demonstrating their capacity for capturing rich contextual patterns. However, their AUP scores, while strong, are more comparable to those of other models, except for NOVA 1 (AUP = 0.966). This suggests that while LLM-derived embeddings excel in overall ranking and discrimination, they yield broader probability distributions for ambiguous items, which can reduce precision–recall balance at fixed thresholds. Additionally, the use of randomized hyperparameter search for these models, due to computational constraints, may have limited their ability to fully match the grid-searched FoodProX models.

# Class-by-Class Observations

# NOVA 1 (Unprocessed/Minimally Processed)

All advanced classifiers demonstrate excellent performance in identifying unprocessed or minimally processed foods. The simplest baseline models, based solely on ingredient or additive counts, perform the worst, with the ingredient count model outperforming the additive-based one. Among the more sophisticated methods, the BERT/BioBERT models paired with neural networks and XGBoost achieve the highest AUC scores (0.994 and 0.995), followed closely by FoodProX with 11 nutrients plus additive count (AUC = 0.993) and FoodProX using only 11 nutrients (AUC = 0.988). In terms of precision–recall performance, the embedding models again lead with AUP scores of 0.966 and 0.964, respectively, followed by FoodProX 11 nutrients plus additives (AUP = 0.956) and FoodProX 11 nutrients (AUP = 0.941). Overall, nutrient-based models alone already achieve near-perfect separability for NOVA 1.
Incorporating contextual embeddings from text provides modest but consistent improvements in both ranking (AUC) and precision–recall (AUP).

# NOVA 2 (Processed Culinary Ingredients)

NOVA 2 is the smallest and most challenging category to classify. The BERT and BioBERT neural network classifiers achieve the highest AUC scores at 0.993, followed by the XGBoost models at 0.992. The FoodProX model with 11 nutrients plus additive count reaches AUC = 0.988, while the simpler FoodProX 11-nutrient model scores AUC = 0.983. When comparing AUP scores, the FoodProX 11 nutrients plus additives model leads with 0.860, closely followed by the BERT XGBoost model (0.847) and the BioBERT neural network (0.846). The FoodProX model without additive count scores 0.815, while the BERT and BioBERT random forest classifiers record the lowest AUPs among the embedding models (0.743 and 0.760, respectively). FoodProX’s advantage lies in producing well-separated probability estimates for NOVA 2, allowing for a clearer threshold that balances precision and recall, thus improving AUP. By contrast, although embedding-based models slightly outperform in overall ranking ability (as reflected in higher AUC), their probability distributions for NOVA 2 tend to overlap more with those of other classes. This may stem from the ambiguous role of culinary ingredients, which often appear as ingredients in other NOVA classes. As a result, it becomes more challenging for these models to identify a single decision threshold that preserves both high precision and high recall.

# NOVA 3 (Processed Foods)

NOVA 3, like NOVA 2, is among the most challenging classes to classify. This is due primarily to the variability of food products within the category and the inconsistency in manual labeling.
The highest AUC of 0.966 is achieved by the FoodProX model using 11 nutrients combined with additive count, followed closely by the BERT XGBoost model at 0.962 and the BioBERT neural network at 0.961. In terms of AUP, FoodProX 11 nutrients plus additives again leads with 0.888, followed by BERT XGBoost at 0.880. The FoodProX model with only 11 nutrients performs reasonably well (AUC = 0.926, AUP = 0.802), while the simpler ingredient and additive count baselines perform substantially worse (ingredient count AUC = 0.777, AUP = 0.415; additive count AUC = 0.834, AUP = 0.437). Their improvement over the NOVA 2 results reflects the fact that NOVA 3 foods often contain multiple ingredients and additives, making these basic features somewhat more informative and variable. These results show that supplementing the nutrient panel with a count of additives increases FoodProX’s performance and brings it in line with the top-performing embedding models.

# NOVA 4 (Ultra-Processed Foods)

As the majority class, NOVA 4 is the easiest to classify across all models, as these products typically list multiple additives and have distinctive nutrient profiles. Even the simplest explanatory models perform strongly in this category, achieving an AUP of 0.919 with the ingredient count and 0.948 with the additive count. The FoodProX model using only 11 nutrients attains an AUC of 0.948 and an AUP of 0.974. When the additive count is included, these metrics increase to an AUC of 0.980 and an AUP of 0.991. Embedding-based models perform comparably, with top AUCs reaching 0.978 and AUPs up to 0.990. In this class, both the enhanced FoodProX model and the leading embedding pipelines reach top-level performance, indicating that either nutrient composition or contextual embeddings are sufficient to reliably classify ultra-processed foods.
# Time and Computational Resources

Model runtimes varied widely depending on input dimensionality and algorithmic complexity. The two explanatory models completed hyperparameter tuning in about 2 minutes and the full five-fold cross-validation in under 3 minutes. The FoodProX Random Forest model on 11 nutrients required around 22 minutes of grid search hyperparameter tuning plus 8 minutes of cross-validation model training, totaling 30 minutes; when additives were included, the total runtime remained under 30 minutes. In contrast, the embedding-based classifiers ran substantially longer: the neural network classifiers built on BERT or BioBERT embeddings took 50–75 minutes end-to-end (approximately 55 minutes for BioBERT and approximately 1 hour and 14 minutes for BERT), while the tree-based models using BERT and BioBERT embeddings as features required multiple hours (e.g., 2 hours and 34 minutes for the BioBERT Random Forest classifier and over 3 hours for the BioBERT XGBoost classifier). All experiments were carried out on a CPU-only server (36 physical cores, 64 GB RAM). Generating the embeddings for the sentences took about 3 hours on the 36-core CPU, but less than 15 minutes when using four Nvidia V100 SXM2 GPUs. Hyperparameter search ranges were chosen to balance computational tractability and predictive performance, and, as our results show, these constraints had minimal impact on final accuracy.

FoodProX marks a substantial step forward in food processing classification by addressing critical limitations of simpler, manually curated systems:

Scalability: Its automated, nutrient-based framework enables high-throughput classification across large food databases with modest computational resources.

Reproducibility: The model relies on standardized and well-regulated nutrient data, ensuring consistency across datasets.
LLM-based models, such as those built on BERT and BioBERT, offer complementary advantages that make them particularly effective in handling branded food products:

Context-Aware Representation: The [CLS] token captures high-level interactions between ingredients and nutrients, improving classification accuracy compared to averaging token embeddings.

Robustness to Missing Data and Input Heterogeneity: The contextual associations enable the models to infer patterns even when faced with incomplete nutritional fields and poor standardization of ingredients and their synonyms.

Automation & Efficiency: LLM-based methods reduce manual feature engineering efforts, making predictive analysis more efficient and scalable.

LLM-based models are particularly well-suited for real-world applications. Their resilience to inconsistent or incomplete records, common in large-scale datasets like Open Food Facts, makes them reliable for deployment in practice. Moreover, their flexibility allows for seamless incorporation of new information without requiring architectural adjustments. Pretrained LLMs can be fine-tuned on large volumes of food data, adapted to multilingual data, and scaled globally, reducing the need for extensive manual normalization. Importantly, all model types discussed in this study, whether based on nutrient composition, additive counts, or contextual embeddings, can also be used to compute the continuous FPro score. Since FPro is derived from the model’s class probability vector, it is agnostic to the underlying architecture. Regardless of whether the classifier is a FoodProX model or an LLM-based approach, FPro can be calculated as the projection of the predicted class probabilities onto the axis spanning from NOVA 1 to NOVA 4. This flexibility allows practitioners to select a modeling strategy best suited to their data and computational constraints, while still accessing both discrete NOVA labels and a nuanced, continuous measure of food processing.
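A minimal sketch of this projection, assuming the (1 − p1 + p4)/2 formulation of FPro (the function name is ours, and any classifier emitting a four-class NOVA probability vector can feed it):

```python
def fpro(probs):
    """Project a NOVA class-probability vector [p1, p2, p3, p4] onto the
    axis from NOVA 1 (unprocessed, FPro = 0) to NOVA 4 (ultra-processed,
    FPro = 1), using the (1 - p1 + p4) / 2 form."""
    p1, _p2, _p3, p4 = probs
    return (1.0 - p1 + p4) / 2.0

fpro([1.0, 0.0, 0.0, 0.0])  # confident NOVA 1 -> 0.0
fpro([0.0, 0.0, 0.0, 1.0])  # confident NOVA 4 -> 1.0
```

Because only the probability vector enters the formula, the same function applies unchanged whether the probabilities come from a FoodProX model or from an embedding-based classifier.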
Although this study has focused primarily on NOVA classification performance, the same outputs can readily support FPro-based analyses without any modification to the models.

# Limitations and Future Perspectives

While the results presented here are encouraging, several limitations must be acknowledged, along with opportunities for future refinement. First, the quality and consistency of the underlying data remain a significant constraint. Although the Open Food Facts dataset is large, diverse, and publicly accessible, it contains inconsistencies that can affect model reliability. For example, formatting errors in nutrient values, such as the use of decimal commas instead of points, occasionally led to implausibly high concentrations of micronutrients like vitamin A and vitamin C. Additionally, some NOVA labels appear to be misassigned, with foods more appropriately classified as NOVA 3 being labeled as NOVA 2. Such noise introduces uncertainty into both training and evaluation, and may account for some misclassifications across models. Second, the dataset suffers from class imbalance, with NOVA 2 being substantially underrepresented. This makes it particularly difficult for models to learn reliable patterns for culinary ingredients, which often lack distinctive signals in either nutrient composition or additive profiles. To address this imbalance, future research could explore how to tailor resampling techniques, class weighting strategies, or data augmentation methods to the unique challenges of food composition data, while also accounting for its inherent redundancy. Third, while the use of LLMs has proven effective, the construction of input text could be further optimized. At present, inputs are created by simply concatenating product names, ingredient lists, and nutrition facts into a single sequence. A more targeted approach may help the model better isolate which components carry the most predictive signal.
Future work should enhance the interpretability of these models through feature importance analyses that determine which phrases or descriptors are most informative, and through experiments with different prompt structures or token-level attention to improve the quality of the embeddings. Enhancing interpretability is especially important given the trade-off between predictive power and model transparency. Unlike models such as FoodProX, which offer interpretable outputs by quantifying the influence of individual features (e.g., specific nutrients or additive counts), LLM-based models currently operate as black boxes: their contextual embeddings make it difficult to trace decisions back to specific input components, such as an ingredient or a numerical value. Finally, this work underscores the potential of automated food classification systems for public health monitoring and research, particularly when applied to large-scale datasets. Across all NOVA classes, combining nutrient profiles with either textual embeddings (as in the BERT and BioBERT models) or simpler features like additive counts (as in FoodProX) consistently outperformed models that rely on the single, coarse-grained features often leveraged by manual assessors. For projects with limited computational resources, FoodProX models offer a strong balance of efficiency and accuracy. In more resource-rich settings, LLM-based embeddings may provide added value, particularly when dealing with ambiguous or borderline food items, by leveraging the deeper context contained in product descriptions and ingredient lists.

# Code Availability

All code generated for the analysis is available through Dr. Menichetti’s Lab GitHub repository at https://github.com/menicgiulia/AI4FoodProcessing.

# Acknowledgments

G.M. is supported by NIH/NHLBI grant K25HL173665 and 24MERIT 1185447. This manuscript is a preprint version of a chapter in “Agrifood Informatics” (The Royal Society of Chemistry, 2026).
# References

1 Hall KD, Ayuketah A, Brychta R, Cai H, Cassimatis T, Chen KY et al. Ultra-Processed Diets Cause Excess Calorie Intake and Weight Gain: An Inpatient Randomized Controlled Trial of Ad Libitum Food Intake. Cell Metab 2019; 30: 67-77.e3.
2 Mendoza K, Smith-Warner SA, Rossato SL, Khandpur N, Manson JE, Qi L et al. Ultraprocessed foods and cardiovascular disease: analysis of three large US prospective cohorts and systematic review and meta-analysis of prospective cohort studies. The Lancet Regional Health Americas 2024; 37: 100859.
3 Ugai T, Sasamoto N, Lee H-Y, Ando M, Song M, Tamimi RM et al. Is early-onset cancer an emerging global epidemic? Current evidence and future implications. Nat Rev Clin Oncol 2022; 19: 656–673.
4 European Food Information Council. Food processing. 2025. https://www.eufic.org/en/foodproduction/category/food-processing (accessed 16 Feb 2025).
5 United States Department of Agriculture, Zachary JC. Processing & Marketing. 2025. https://www.ers.usda.gov/topics/food-markets-prices/processing-marketing (accessed 16 Feb 2025).
6 Food and Agriculture Organization of the United Nations, Fellows P. FAO Diversification booklet 5: Processed foods for improved livelihoods. 2004. http://www.fao.org/docrep/007/y5113e/y5113e04.htm (accessed 16 Feb 2025).
7 Liu L, Wang J, Rosenberg D, Zhao H, Lengyel G, Nadel D. Fermented beverage and food storage in 13,000 y-old stone mortars at Raqefet Cave, Israel: Investigating Natufian ritual feasting. J Archaeol Sci Rep 2018; 21: 783–793.
8 Misra NN, Koubaa M, Roohinejad S, Juliano P, Alpas H, Inácio RS et al. Landmarks in the historical development of twenty first century food processing technologies. Food Research International 2017; 97: 318–339.
9 Huebbe P, Rimbach G. Historical Reflection of Food Processing and the Role of Legumes as Part of a Healthy Balanced Diet. Foods 2020; 9: 1056.
10 History Channel, Randle A. Who Invented the TV Dinner?
2021. https://www.history.com/news/tv-dinner-history-inventor (accessed 16 Feb 2025).
11 Our World in Data, Giattino C, Ortiz-Ospina E, Roser M. Working Hours. 2020. https://ourworldindata.org/working-hours (accessed 16 Feb 2025).
12 Moss M. Salt Sugar Fat: How the Food Giants Hooked Us. Random House: New York, 2013.
13 Fazzino TL, Rohde K, Sullivan DK. Hyper‐Palatable Foods: Development of a Quantitative Definition and Application to the US Food System Database. Obesity 2019; 27: 1761–1768.
14 de Macedo IC, de Freitas JS, da Silva Torres IL. The Influence of Palatable Diets in Reward System Activation: A Mini Review. Adv Pharmacol Sci 2016; 2016: 1–7.
15 Gibney MJ, Forde CG. Nutrition research challenges for processed food and health. Nat Food 2022; 3: 104–109.
16 Institute of Food Science and Technology. IFST Food Processing Knowledge Hub. https://www.ifst.org/knowledge-hubs/food-processing (accessed 16 Feb 2025).
17 FHA Food and Beverage. From Farm to Table: Understanding Food Processing. 2023. https://fhafnb.com/glossary/food-processing/ (accessed 16 Feb 2025).
18 Dani S. Food Supply Chain Management and Logistics: From Farm to Fork. 1st ed. Kogan Page Ltd, 2015.
Jones JM. Food processing: criteria for dietary guidance and public health? Proceedings of the Nutrition Society 2019; 78: 4–18.
USApple. Apples and Wax Backgrounder. 2024. https://usapple.org/news-resources/apples-andwax-backgrounder (accessed 16 Feb 2025).
Monteiro CA. Nutrition and health. The issue is not food, nor nutrients, so much as processing. Public Health Nutr 2009; 12: 729–731.
Monteiro CA, Cannon G, Levy RB, Moubarac J-C, Jaime P, Martins AP et al. NOVA. The star shines bright. Position paper 2. World Nutrition 2016; 7: 28–38.
Sharma LL, Teret SP, Brownell KD. The Food Industry and Self-Regulation: Standards to Promote Success and to Avoid Public Health Failures. Am J Public Health 2010; 100: 240–246.
Pomeranz JL, Broad Leib EM, Mozaffarian D.
Regulation of Added Substances in the Food Supply by the Food and Drug Administration Human Foods Program. Am J Public Health 2024; 114: 1061–1070.
European Commission. Food and Feed Information Portal Database: Food Additives. 2025. https://ec.europa.eu/food/food-feed-portal/screen/food-additives/ (accessed 13 May 2025).
Chen X, Zhang Z, Yang H, Qiu P, Wang H, Wang F et al. Consumption of ultra-processed foods and health outcomes: a systematic review of epidemiological studies. Nutr J 2020; 19: 86.
Vandevijvere S, Jaacks LM, Monteiro CA, Moubarac J, Girling‐Butcher M, Lee AC et al. Global trends in ultraprocessed food and drink product sales and their association with adult body mass index trajectories. Obesity Reviews 2019; 20: 10–19.
Delpino FM, Figueiredo LM, Bielemann RM, da Silva BGC, dos Santos FS, Mintem GC et al. Ultra-processed food and risk of type 2 diabetes: a systematic review and meta-analysis of longitudinal studies. Int J Epidemiol 2022; 51: 1120–1141.
Srour B, Fezeu LK, Kesse-Guyot E, Allès B, Méjean C, Andrianasolo RM et al. Ultra-processed food intake and risk of cardiovascular disease: prospective cohort study (NutriNet-Santé). BMJ 2019; l1451.
Popkin BM, Ng SW. The nutrition transition to a stage of high obesity and noncommunicable disease prevalence dominated by ultra‐processed foods is not inevitable. Obesity Reviews 2022; 23. doi:10.1111/obr.13366.
Clark SE, Hawkes C, Murphy SME, Hansen-Kuhn KA, Wallinga D. Exporting obesity: US farm and trade policy and the transformation of the Mexican consumer food environment. Int J Occup Environ Health 2012; 18: 53–64.
Milanlouei S, Menichetti G, Li Y, Loscalzo J, Willett WC, Barabási A-L. A systematic comprehensive longitudinal evaluation of dietary factors associated with acute myocardial infarction and fatal coronary heart disease. Nat Commun 2020; 11: 6074.
Zafar MI, Mills KE, Zheng J, Regmi A, Hu SQ, Gou L et al.
Low-glycemic index diets as an intervention for diabetes: a systematic review and meta-analysis. Am J Clin Nutr 2019; 110: 891–902.
Machado PP, Steele EM, Levy RB, Sui Z, Rangan A, Woods J et al. Ultra-processed foods and recommended intake levels of nutrients linked to non-communicable diseases in Australia: evidence from a nationally representative cross-sectional study. BMJ Open 2019; 9: e029544.
Aguayo-Patrón S, Calderón de la Barca A. Old Fashioned vs. Ultra-Processed-Based Current Diets: Possible Implication in the Increased Susceptibility to Type 1 Diabetes and Celiac Disease in Childhood. Foods 2017; 6: 100.
Darcey VL, Guo J, Chi M, Chung ST, Courville AB, Gallagher I et al. Brain dopamine responses to ultra-processed milkshakes are highly variable and not significantly related to adiposity in humans. Cell Metab 2025; 37: 616-628.e5.
Robinson E, Johnstone AM. Ultraprocessed food (UPF), health, and mechanistic uncertainty: What should we be advising the public to do about UPFs? PLoS Med 2024; 21: e1004439.
Dicken SJ, Batterham RL. The Role of Diet Quality in Mediating the Association between Ultra-Processed Food Intake, Obesity and Health-Related Outcomes: A Review of Prospective Cohort Studies. Nutrients 2021; 14: 23.
Aguilera JM. The food matrix: implications in processing, nutrition and health. Crit Rev Food Sci Nutr 2019; 59: 3612–3629.
Parada J, Aguilera JM. Food microstructure affects the bioavailability of several nutrients. J Food Sci 2007. doi:10.1111/j.1750-3841.2007.00274.x.
Berry SE, Tydeman EA, Lewis HB, Phalora R, Rosborough J, Picout DR et al. Manipulation of lipid bioaccessibility of almond seeds influences postprandial lipemia in healthy human subjects. Am J Clin Nutr 2008; 88: 922–929.
Grassby T, Mandalari G, Grundy MM-L, Edwards CH, Bisignano C, Trombetta D et al. In vitro and in vivo modeling of lipid bioaccessibility and digestion from almond muffins: The importance of the cell-wall barrier mechanism. J Funct Foods 2017; 37: 263–271.
Novotny JA, Gebauer SK, Baer DJ. Discrepancy between the Atwater factor predicted and empirically measured energy values of almonds in human diets. Am J Clin Nutr 2012; 96: 296–301.
Wyatt P, Berry SE, Finlayson G, O’Driscoll R, Hadjigeorgiou G, Drew DA et al. Postprandial glycaemic dips predict appetite and energy intake in healthy individuals. Nat Metab 2021; 3: 523–529.
Corbin KD, Carnero EA, Dirks B, Igudesman D, Yi F, Marcus A et al. Host-diet-gut microbiome interactions influence human energy balance: a randomized clinical trial. Nat Commun 2023; 14: 3161.
Naimi S, Viennois E, Gewirtz AT, Chassaing B. Direct impact of commonly used dietary emulsifiers on human gut microbiota. Microbiome 2021; 9: 66.
Suez J, Cohen Y, Valdés-Mas R, Mor U, Dori-Bachash M, Federici S et al. Personalized microbiome-driven effects of non-nutritive sweeteners on human glucose tolerance. Cell 2022; 185: 3307-3328.e19.
Kwon YH, Banskota S, Wang H, Rossi L, Grondin JA, Syed SA et al. Chronic exposure to synthetic food colorant Allura Red AC promotes susceptibility to experimental colitis via intestinal serotonin in mice. Nat Commun 2022; 13: 7617.
Jarmakiewicz-Czaja S, Piątek D, Filip R. The impact of selected food additives on the gastrointestinal tract in the example of nonspecific inflammatory bowel diseases. Archives of Medical Science 2021. doi:10.5114/aoms/125001.
Rifai L, Saleh FA. A Review on Acrylamide in Food: Occurrence, Toxicity, and Mitigation Strategies. Int J Toxicol 2020; 39: 93–102.
Martínez Steele E, Khandpur N, da Costa Louzada ML, Monteiro CA. Association between dietary contribution of ultra-processed foods and urinary concentrations of phthalates and bisphenol in a nationally representative sample of the US population aged 6 years and older. PLoS One 2020; 15: e0236738.
Tumu K, Vorst K, Curtzwiler G. Endocrine modulating chemicals in food packaging: A review of phthalates and bisphenols. Compr Rev Food Sci Food Saf 2023; 22: 1337–1359.
Birlouez-Aragon I, Morales F, Fogliano V, Pain J-P. The health and technological implications of a better control of neoformed contaminants by the food industry. Pathologie Biologie 2010; 58: 232–238.
Ramadan M, Cooper B, Posnack NG. Bisphenols and phthalates: Plastic chemical exposures can contribute to adverse cardiovascular health outcomes. Birth Defects Res 2020; 112: 1362–1385.
Matuszczak E, Komarowska MD, Debek W, Hermanowicz A. The Impact of Bisphenol A on Fertility, Reproductive System, and Development: A Review of the Literature. Int J Endocrinol 2019; 2019: 1–8.
Dalamaga M, Kounatidis D, Tsilingiris D, Vallianou NG, Karampela I, Psallida S et al. The Role of Endocrine Disruptors Bisphenols and Phthalates in Obesity: Current Evidence, Perspectives and Controversies. Int J Mol Sci 2024; 25: 675.
Gibney MJ. Ultra-Processed Foods: Definitions and Policy Issues. Curr Dev Nutr 2019; 3: nzy077.
Braesco V, Souchon I, Sauvant P, Haurogné T, Maillot M, Féart C et al. Ultra-processed foods: how functional is the NOVA system? Eur J Clin Nutr 2022; 76: 1245–1253.
Mialon M, Serodio P, Scagliusi FB. Criticism of the NOVA classification: who are the protagonists? World Nutrition 2018; 9: 176–240.
Gibney MJ, Forde CG, Mullally D, Gibney ER. Ultra-processed foods in human health: a critical appraisal. Am J Clin Nutr 2017; 106: 717–724.
Messina M, Messina V. Nova fails to appreciate the value of plant‐based meat and dairy alternatives in the diet. J Food Sci 2025; 90. doi:10.1111/1750-3841.70039.
Vandevijvere S, Jaacks LM, Monteiro CA, Moubarac J, Girling‐Butcher M, Lee AC et al. Global trends in ultraprocessed food and drink product sales and their association with adult body mass index trajectories. Obesity Reviews 2019; 20: 10–19.
Baldridge AS, Huffman MD, Taylor F, Xavier D, Bright B, Van Horn LV et al. The Healthfulness of the US Packaged Food and Beverage Supply: A Cross-Sectional Study. Nutrients 2019; 11: 1704.
Katidi A, Vlassopoulos A, Noutsos S, Kapsokefalou M.
Ultra-Processed Foods in the Mediterranean Diet according to the NOVA Classification System; A Food Level Analysis of Branded Foods in Greece. Foods 2023; 12: 1520. Frank T, Thow A-M, Ng SW, Ostrowski J, Bopape M, Swart EC. A Fit-for-Purpose Nutrient Profiling Model to Underpin Food and Nutrition Policies in South Africa. Nutrients 2021; 13: 2584. Ravandi B, Ispirova G, Sebek M, Mehler P, Barabási A-L, Menichetti G. Prevalence of processed foods in major US grocery stores. Nat Food 2025. doi:10.1038/s43016-024-01095-7. Qian F, Riddle MC, Wylie-Rosett J, Hu FB. Red and Processed Meats and Health Risks: How Strong Is the Evidence? Diabetes Care 2020; 43: 265–271. Feinstein MJ, Hsue PY, Benjamin LA, Bloomfield GS, Currier JS, Freiberg MS et al. Characteristics, Prevention, and Management of Cardiovascular Disease in People Living With HIV: A Scientific Statement From the American Heart Association. Circulation 2019; 140. doi:10.1161/CIR.0000000000000695. Ludwig DS, Willett WC, Putt ME. Wash-in and washout effects: mitigating bias in short term dietary and other trials. BMJ 2025; : e082963. Menichetti G, Barabási A-L, Loscalzo J. Chemical Complexity of Food and Implications for Therapeutics. New England Journal of Medicine 2025; 392: 1836–1845. 71 Chantal J, Hercberg S, World Health Organization. Regional Office for Europe. Development of a new front-of-pack nutrition label in France: the five-colour Nutri-Score. Public Health Panorama 2017; 3: 712–725. 72 Commonwealth of Australia. Health Star Rating System. 2024.http://healthstarrating.gov.au/ (accessed 13 Mar2025). 73 Yuca. Yuka. 2024.https://yuka.io/en/ (accessed 13 Mar2025). 74 CoCo Positivo SL. GoCoCo. 2024.https://www.gococo.app/ (accessed 13 Mar2025). 75 Lacy-Nichols J, Freudenberg N. Opportunities and limitations of the ultra-processed food framing. Nat Food 2022; 3: 975–977. 76 Slimani N, Deharveng G, Southgate DAT, Biessy C, Chajès V, van Bakel MME et al. 
Contribution of highly industrially processed foods to the nutrient intakes and patterns of middleaged populations in the European Prospective Investigation into Cancer and Nutrition study. Eur J Clin Nutr 2009; 63: S206–S225. 77 Poti JM, Mendez MA, Ng SW, Popkin BM. Is the degree of food processing and convenience linked with the nutritional quality of foods purchased by US households? Am J Clin Nutr 2015; 101: 1251–1262. 78 Davidou S, Christodoulou A, Fardet A, Frank K. The holistico-reductionist Siga classification according to the degree of food processing: an evaluation of ultra-processed foods in French supermarkets. Food Funct 2020; 11: 2026–2039. 79 Merz B, Temme E, Alexiou H, Beulens JWJ, Buyken AE, Bohn T et al. Nutri-Score 2023 update. Nat Food 2024; 5: 102–110. 80 Santé publique France. Transnational governance of Nutri-Score: the 7 engaged countries adopt an improved algorithm for food. 2022.https://www.santepubliquefrance.fr/en/transnationalgovernance-of-nutri-score-the-7-engaged-countries-adopt-an-improved-algorithm-for-food (accessed 13 Mar2025). 81 Santé publique France. Nutri-Score Questions & Answers English Version. 2024. 82 Ireland JD, Møller A. LanguaL Food Description: a Learning Process. Eur J Clin Nutr 2010; 64: S44–S48. 83 Durazzo A, Camilli E, D’Addezio L, Sette S, Marconi S, Piccinelli R et al. Italian composite dishes: description and classification by LanguaLTM and FoodEx2. European Food Research and Technology 2020; 246: 287–295. 84 Durazzo A, D’Andrea T, Gabrielli P, Pilla N, Aguzzi A, Lucarini M et al. Development of a Database of LanguaLTM and FoodEx2 Codes of 50 Ready-to-Eat Products. Nutrients 2024; 16: 1151. 85 Delgado A, Issaoui M, Vieira MC, Saraiva de Carvalho I, Fardet A. Food Composition Databases: Does It Matter to Human Health? Nutrients 2021; 13: 2816. 86 Slimani N, Deharveng G, Unwin I, Southgate DAT, Vignat J, Skeie G et al. 
The EPIC nutrient database project (ENDB): a first attempt to standardize nutrient databases across the 10 European countries participating in the EPIC study. Eur J Clin Nutr 2007; 61: 1037–1056. 87 The European Food Safety Authority. The Food Classification and Description System FoodEx2 (Revision 2). EFSA Supporting Publications 2015; 12: 804E. 88 European Food Safety Authority, Evidence Management Unit. Dietary Exposure (DietEx) Tool. 2022.https://www.efsa.europa.eu/sites/default/files/2021-08/dietex-features-instructions.pdf (accessed 13 May2025). State General Laboratory of Cyprus. ImproRisk model, an open access risk assessment tool. 2022.https://efsa.onlinelibrary.wiley.com/doi/pdf/10.2903/fr.efsa.2024.FR-0037 (accessed 13 May2025). D’Addezio L, Sette S, Piccinelli R, Le Donne C, Turrini A. FoodEx2 Harmonization of the Food Consumption Database from the Italian IV SCAI Children’s Survey. Nutrients 2024; 16: 1065. Dooley DM, Griffiths EJ, Gosal GS, Buttigieg PL, Hoehndorf R, Lange MC et al. FoodOn: a harmonized food ontology to increase global food traceability, quality control and data integration. NPJ Sci Food 2018; 2: 23. U.S. Department of Agriculture, Agricultural Research Service. FoodData Central. 2019.fdc.nal.usda.gov. Bandrowski A, Brinkman R, Brochhausen M, Brush MH, Bug B, Chibucos MC et al. The Ontology for Biomedical Investigations. PLoS One 2016; 11: e0154556. Menichetti G, Ravandi B, Mozaffarian D, Barabási A-L. Machine learning prediction of the degree of food processing. Nat Commun 2023; 14: 2312. Kretser A, Murphy D, Starke-Reed P. A partnership for public health: USDA branded food products database. Journal of Food Composition and Analysis 2017; 64: 10–12. Harvard T.H. Chan School of Public Health. Nutrition Questionnaire Service Center. 2022.https://hsph.harvard.edu/department/nutrition/nutrition-questionnaire-service center/#nutrient-data (accessed 13 May2025). U.S. Food and Drug Administration. Guidance for Industry: Food Labeling Guide. 
2013.https://www.fda.gov/regulatory-information/search-fda-guidance-documents/guidanceindustry-food-labeling-guide (accessed 13 May2025). Jeong H, Tombor B, Albert R, Oltvai ZN, Barabási A-L. The large-scale organization of metabolic networks. Nature 2000; 407: 651–654. Bar-Even A, Noor E, Flamholz A, Buescher JM, Milo R. Hydrophobicity and charge shape cellular metabolite concentrations. PLoS Comput Biol 2011; 7. doi:10.1371/journal.pcbi.1002166. Menichetti G, Barabási A-L. Nutrient concentrations in food display universal behaviour. Nat Food 2022; 3: 375–382. Menichetti G, Barabási A-L, Loscalzo J. Decoding the Foodome: Molecular Networks Connecting Diet and Health. Annu Rev Nutr 2024; 44: 257–288. Martínez Steele E, Baraldi LG, Louzada ML da C, Moubarac J-C, Mozaffarian D, Monteiro CA. Ultra-processed foods and added sugars in the US diet: evidence from a nationally representative cross-sectional study. BMJ Open 2016; 6: e009892. Khandpur N, Rossato S, Drouin-Chartier J-P, Du M, Steele EM, Sampson L et al. Categorising ultra-processed foods in large-scale cohort studies: evidence from the Nurses’ Health Studies, the Health Professionals Follow-up Study, and the Growing Up Today Study. J Nutr Sci 2021; 10: e77. GS1 UK, Cranfield University School of Management. Data Crunch Report: The Impact of Bad Data on Profits and Customer Service in the UK Grocery Industry. 2009.https://dspace.lib.cranfield.ac.uk/bitstream/handle/1826/4135/Data_crunch_report.pdf (accessed 3 Apr2025). European Commission. Nutrition labelling. 2024.https://food.ec.europa.eu/food-safety/labellingand-nutrition/food-information-consumers-legislation/nutrition-labelling_en (accessed 18 Mar2025). 106 Food and Agriculture Organization of the United Nations. International Network of Food Data Systems (INFOODS). 2022.https://www.fao.org/infoods/infoods/tables-anddatabases/faoinfoods-databases/en/. 107 Hu G, Flexner N, Tiscornia MV, L’Abbé MR. 
Accelerating the Classification of NOVA Food Processing Levels Using a Fine-Tuned Language Model: A Multi-Country Study. Nutrients 2023; 15: 4167. 108 Lim J, An S, Woo G, Kim C, Jeon J-J. Context-Driven Missing Data Imputation via Large Language Model. 2025.https://openreview.net/forum?id $\circleddash$ b2oLgk5XRE. 109 Open Food Facts. Open Food Facts. 2025.https://world.openfoodfacts.org (accessed 5 Mar2025). 110 Sarda B, Kesse-Guyot E, Deschamps V, Ducrot P, Galan P, Hercberg S et al. Complementarity between the updated version of the front-of-pack nutrition label Nutri-Score and the foodprocessing NOVA classification. Public Health Nutr 2024; 27: e63. 111 Pollan M. Food Rules: An Eater’s Manual. Penguin Publishing Group: New York City, U.S., 2009. 112 Devlin J, Chang M-W, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In: Proceedings of the 2019 Conference of the North. Association for Computational Linguistics: Stroudsburg, PA, USA, 2019, pp 4171–4186. 113 Lee J, Yoon W, Kim S, Kim D, Kim S, So CH et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 2020; 36: 1234–1240. 114 Bose P, Srinivasan S, Sleeman WC, Palta J, Kapoor R, Ghosh P. A Survey on Recent Named Entity Recognition and Relationship Extraction Techniques on Clinical Texts. Applied Sciences 2021; 11: 8319. 115 Elbiach O, Grissette H, Nfaoui EH. Leveraging Transformer Models for Enhanced Pharmacovigilance: A Comparative Analysis of ADR Extraction from Biomedical and Social Media Texts. AI 2025; 6: 31. 116 Bhasuran B. BioBERT and Similar Approaches for Relation Extraction. 2022, pp 221–235. 117 Pellegrini C, Özsoy E, Wintergerst M, Groh G. Exploiting Food Embeddings for Ingredient Substitution. In: Proceedings of the 14th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2021). SCITEPRESS, 2021, pp 67–77. 
118 Lo FP-W, Qiu J, Wang Z, Chen J, Xiao B, Yuan W et al. Dietary Assessment with Multimodal ChatGPT: A Systematic Analysis. ArXiv. 2023. doi:https://doi.org/10.48550/arXiv.2312.08592. 119 What We Eat In America (WWEIA) Database. 2024.https://data.nal.usda.gov/dataset/what-weeat-america-wweia-database (accessed 5 Mar2025).
This chapter explores the evolution, classification, and health implications of food processing, while emphasizing the transformative role of machine learning, artificial intelligence (AI), and data science in advancing food informatics. It begins with a historical overview and a critical review of traditional classification frameworks such as NOVA, Nutri-Score, and SIGA, highlighting their strengths and limitations, particularly the subjectivity and reproducibility challenges that hinder epidemiological research and public policy. To address these issues, the chapter presents novel computational approaches, including FoodProX, a random forest model trained on nutrient composition data to infer processing levels and generate a continuous FPro score. It also explores how large language models like BERT and BioBERT can semantically embed food descriptions and ingredient lists for predictive tasks, even in the presence of missing data. A key contribution of the chapter is a novel case study using the Open Food Facts database, showcasing how multimodal AI models can integrate structured and unstructured data to classify foods at scale, offering a new paradigm for food processing assessment in public health and research.
# 1 Introduction

Large Language Models (LLMs) have demonstrated remarkable success across a wide range of natural language processing (NLP) tasks (Brown et al. (2020), Chowdhery et al. (2023), Touvron et al. (2023a), Ouyang et al. (2022)), including question answering (Kamalloo et al., 2023), summarization (Liu et al., 2024), and machine translation (Zhang et al., 2023). As LLMs have grown in size and have been trained on increasingly diverse and large datasets, their emergent ability to perform different types of reasoning (Wei et al. (2022a), Zhou et al. (2023)), ranging from arithmetic (Imani et al., 2023) and neurosymbolic reasoning (Fang et al., 2024) to commonsense inference (Zhao et al., 2023), has become a central focus of recent research. This has opened new possibilities for solving complex problems that traditionally required structured or symbolic approaches (Pan et al. (2023), He-Yueya et al. (2023)).

Figure 1: Illustration of our decompositional retrieval-based reasoning method. $Q$: When did the team that Michael's best friend support last win the Championship? $q_1$: Who is Michael's best friend? $q_2$: What team does he support? $q_3$: When did that team last win the Championship? Our method decomposes the question into sub-questions, performs iterative, context-aware retrieval conditioned on previous answers, and merges the resulting subgraphs for guided reasoning.

However, despite their broad capabilities, LLMs still struggle with tasks requiring multi-hop reasoning (Yang et al., 2024), factual grounding, or explicit access to structured knowledge. These models are prone to hallucinations and logical inconsistencies, particularly when operating in knowledge-intensive domains (Ji et al. (2023b), Huang et al. (2025)).
This is partially due to the heavy reliance on implicit knowledge stored in parameters (Hu et al., 2024b) and the lack of explicit mechanisms for integrating or reasoning over structured information. Recent work on retrieval-augmented generation (Lewis et al., 2020), graph-augmented LLMs (Yasunaga et al., 2021), and neurosymbolic reasoning (Fang et al., 2024) has aimed to bridge this gap.

In this work, we address these issues by proposing a method for knowledge-guided decompositional reasoning with LLMs. Our method injects structured knowledge into the reasoning process using textualized knowledge graphs. Specifically, we decompose complex questions into sub-questions, retrieve relevant subgraphs from a textual knowledge graph, and merge them for structured and reasoning-enhanced retrieval (Figure 1). The obtained graph is then used to guide LLMs toward generating more accurate and interpretable answers. This hybrid approach enhances both the factual correctness and the transparency of LLM predictions, particularly in settings requiring multi-step reasoning over domain knowledge.

To verify the effectiveness of our approach, we conduct extensive experiments on benchmark datasets for complex question answering, namely CWQ (Talmor and Berant, 2018) and WebQSP (Yih et al., 2016). We compare our method against standard prompting techniques, as well as existing state-of-the-art approaches that combine LLMs with knowledge graphs. Results show that our method achieves consistent improvements in accuracy without increasing the LLM's parameter count or the number of LLM calls, demonstrating its efficiency.

Our contributions are as follows:

• We develop a novel knowledge graph retrieval method that uses query decomposition, helping the LLM to reason over structured data for complex questions.
• We introduce a hybrid similarity function that uses the complex question and its decomposition to guide the retrieval process.
• We demonstrate improvements in accuracy and factual consistency on multi-hop QA benchmarks.
• Our method reduces the number of LLM calls compared to other baselines, achieving a $3 \times$ to $5 \times$ reduction on both datasets.

# 2 Background

# 2.1 Can LLMs reason?

LLMs such as GPT-3 (Brown et al., 2020), PaLM (Chowdhery et al., 2023), and LLaMA (Touvron et al., 2023a) have demonstrated strong performance across a wide range of language tasks, including reasoning-based benchmarks. Their ability to generalize in zero-shot (Kojima et al., 2022) and few-shot settings has led to the emergence of new prompting techniques, such as Chain-of-Thought (CoT) reasoning (Wei et al., 2022b), which improves multi-step reasoning by encouraging models to generate intermediate reasoning steps. Variants like self-consistency (Wang et al., 2023) further refine this by sampling multiple reasoning paths and aggregating answers for improved robustness. More recently, reinforcement learning has been used to train entirely new models (DeepSeek-AI et al., 2025) or improve model prompting (Pternea et al., 2024), showing great potential for the future.

Despite these advances, LLMs remain prone to hallucinations—generating fluent but factually incorrect or logically inconsistent outputs (Huang et al., 2025; Srivastava et al., 2023; Ji et al., 2023b). This is especially problematic in knowledge-intensive tasks requiring factual grounding, multi-hop reasoning, or domain-specific expertise (Ji et al. (2023a), Opsahl (2024)). These issues stem in part from the implicit nature of knowledge storage in model parameters, which limits their ability to verify facts or reason explicitly over external knowledge (Petroni et al. (2019), Bommasani et al. (2021)).
Recent work has explored augmenting LLMs with tool use, such as code interpreters (Pi et al., 2022), equation solvers (He-Yueya et al., 2023), or symbolic solvers (Lam et al., 2024; Pan et al., 2023), to externalize and validate parts of the reasoning process.

# 2.2 LLMs and graphs

Graphs offer a natural and interpretable way to represent real-world data through entities and their structured relationships. Integrating knowledge graphs with Large Language Models (LLMs) is a promising research direction that enables models to better handle real-life scenarios with structured data (Li et al., 2024; Hu et al., 2024a). Knowledge graphs can enhance LLMs by providing explicit, grounded context, which helps mitigate hallucinations (Li et al., 2024; Agrawal et al., 2024), but also makes the model dependent on the noise or incompleteness of the graph (Dong et al., 2025). By grounding the generation process in a textualized or symbolic knowledge graph, LLMs can produce responses that are more accurate and aligned with real-world facts. This is especially useful in tasks such as question answering (Baek et al., 2023; Yasunaga et al., 2021), logical reasoning (Choudhary and Reddy, 2024), or dialogue systems (Kang et al., 2023), where factual precision is crucial.

Figure 2: Overview of our retrieval method: the complex question is first decomposed into smaller subquestions, for which we iteratively perform retrieval and answer generation; once the retrieval is done for all subquestions, we merge the subgraphs and give the result as a hard prompt (textualized graph) and a soft prompt (graph encoder output) to the model.

LLMs and graph neural networks (GNNs) can also be used together (Xu et al., 2024; He et al., 2024a), each complementing the other. Graphs can be used to inject knowledge into LLMs via methods like structured prompting (Baek et al., 2023; Zhang et al., 2024) or retrieval-based augmentation (Lewis et al., 2020; Peng et al., 2024).
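Injecting a knowledge graph into a prompt as text can be as simple as rendering its triples one per line. A minimal sketch of this "hard prompt" idea, using our own formatting convention (not a standard):

```python
def textualize_graph(edges):
    """Render (head, relation, tail) triples as one line each, suitable
    for inclusion in an LLM prompt as a textual ('hard') graph.
    The arrow notation is an illustrative convention of ours."""
    return "\n".join(f"{head} -[{relation}]-> {tail}"
                     for head, relation, tail in edges)
```

The resulting string can be concatenated with the question inside the prompt, grounding the model's answer in the retrieved facts.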
LLMs can support and enhance graph-centred tasks (Pan et al., 2024) by performing entity linking, relation extraction, or even link prediction (Shu et al., 2025), which largely improves the graph's coverage. LLMs have also been explored as generators of graph-structured outputs or as interpretable reasoning agents over graphs using intermediate symbolic steps. In such hybrid frameworks, LLMs benefit from the structure and factual reliability of graphs, while graphs gain from the generalization and language understanding ability of LLMs (Pan et al., 2024). Nonetheless, most existing methods remain heuristic and lack a principled understanding of how best to align symbolic and neural representations (Cheng et al., 2025).

# 3 Related Work

Different methods have already demonstrated promising results on Knowledge Graph Question Answering (KGQA) tasks. He et al. (2024b) retrieves a subgraph from a textual knowledge graph and feeds it to the LLM without any explicit reasoning step, which can hinder performance on complex questions. Other existing techniques introduce reasoning mechanisms within their framework: Sun et al. (2024) performs iterative entity and relation explorations, and reasons directly on the obtained paths. Similarly, Chen et al. (2024) uses task decomposition and then performs multiple cycles of exploration, memory updating, and evaluation. Performing iterative calls to the LLM has many advantages, but both techniques require a relatively large model (LLaMA-2-70B, GPT-3.5 / GPT-4...) for planning and evaluation. In contrast, our method focuses on retrieving a more pertinent graph rather than answering iteratively, and uses fewer LLM calls—which can be controlled by the number of generated subquestions. Other methods like Luo et al. (2024) first predict reasoning paths as plans and search for those paths within the knowledge graph.
Given that the LLM does not have any prior knowledge of the relations within the knowledge graph, the technique requires knowledge distillation into the LLM to generate faithful relation paths. Our method does not require any fine-tuning of the LLM, which reduces the cost of usage and the preprocessing time for new datasets.

# 4 Method

The overall pipeline for our method is presented in Figure 2. In order to tackle complex questions, we first decompose a complex question into a set of logically ordered subquestions. We then perform an iterative retrieval cycle, performing retrieval on the graph for each subquestion we obtain. The results of the multiple retrievals are then combined into a single graph, which is used to generate the final answer to the complex question.

# 4.1 Subquestions Generation

Given a complex question $Q$, we want to obtain a set of subquestions $\{q_1, \ldots, q_n\}$. The subquestions must be logically ordered (answering $q_1$ is necessary to answer $q_2$, etc.), atomic (cannot be split into smaller subquestions), and cover all aspects of the complex question. Therefore, answering all subquestions in the given order should be equivalent to answering the complex question. In our work, we generate the subquestions using an LLM, leveraging its semantic understanding and its implicit knowledge capabilities. Using an LLM provides a flexible framework for decomposing complex questions, independent of the domain or the question type. To fulfill all the conditions mentioned above, we prompt the model with specific instructions about the subquestions; we also provide some manually written examples of decomposition to guide the model's behavior (see Appendix B for details about prompting).

# 4.2 Hybrid Entity Retrieval

For the retrieval part of the generation pipeline, we want to obtain a subgraph for each subquestion.
Considering each subquestion independently might lead to very distinct subgraphs; moreover, the subquestions can lack sufficient contextual information on their own to retrieve all the relevant nodes and edges from the knowledge graph. To address this, we introduce a hybrid retrieval method that combines both the subquestion and the original complex question, allowing the model to benefit from the specificity of the former and retain the broader context provided by the latter. Our hybrid retrieval mechanism is implemented through a weighted similarity function, controlled by a parameter $\alpha$, which balances the influence of both components. Figure 3 presents the equations for both node and edge retrieval. When performing retrieval on the graph for subquestion $q_i$, we keep track of the previous answer (answer $a_{i-1}$ to subquestion $q_{i-1}$). This is crucial, as the answer to $q_i$ might depend on the answer to $q_{i-1}$. Before retrieval, we embed our complex question, the subquestions, and the textual attributes of the nodes/edges in the graph using a Sentence Transformer embedding model (see Appendix B for details). After having retrieved all necessary nodes and edges, we build a connected subgraph from these elements, following the work done in He et al. (2024b). The connectivity of the graph is enforced by the Prize-Collecting Steiner Tree (PCST) algorithm (Bienstock et al., 1993), which optimizes the selection of a subgraph of maximum value based on node/edge weights and query similarity, under a size constraint.
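The hybrid scoring just described can be sketched in a few lines. We follow the convention of Section 5.4, where $\alpha = 1$ falls back to retrieval based only on the complex question; all function and variable names here are illustrative, not the authors' code:

```python
import numpy as np

def hybrid_topk(z_sub, z_q, z_items, alpha=0.7, k=3):
    """Rank item embeddings (one node or edge per row of z_items) by an
    alpha-weighted mix of cosine similarity to the complex-question
    embedding z_q and the subquestion embedding z_sub, and return the
    indices of the top-k items. Illustrative sketch of Section 4.2."""
    def cos(query, items):
        query = query / np.linalg.norm(query)
        items = items / np.linalg.norm(items, axis=1, keepdims=True)
        return items @ query

    # alpha weights the complex question; (1 - alpha) the subquestion.
    scores = alpha * cos(z_q, z_items) + (1 - alpha) * cos(z_sub, z_items)
    return np.argsort(-scores)[:k]
```

Running this once per subquestion, with the top-$k$ nodes and edges then handed to PCST, reproduces the retrieval loop described above.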
$$
V_k^{\,i} = \operatorname*{argtopk}_{n \in V} \left[ \alpha \cos(z_Q, z_n) + (1 - \alpha) \cos(z_i, z_n) \right]
$$

$$
E_k^{\,i} = \operatorname*{argtopk}_{e \in E} \left[ \alpha \cos(z_Q, z_e) + (1 - \alpha) \cos(z_i, z_e) \right]
$$

# 4.3 Subgraphs Merging

After retrieving subgraphs corresponding to each subquestion, we proceed to merge them in order to link relevant information and remove redundancy. Each subgraph is initially connected, as it is constructed using the PCST algorithm. To form the final graph, we take the union of all distinct nodes and edges across all subgraphs. Importantly, we do not directly enforce full connectivity during this merging step, as doing so would require either introducing virtual edges, which could compromise the semantic integrity of the graph, or resorting to computationally expensive graph expansion methods.

Figure 4: Model Accuracy (Hit@1) against the value of the $\alpha$ parameter for both CWQ and WebQSP datasets.

# 4.4 Answer Generation

Once we obtain the merged graph for the complex question, we pass it to our LLM in two different ways, following the generation process described in He et al. (2024b): we provide a textualized version of the graph in the prompt, and also pass the graph through a trained graph encoder (Shi et al., 2021) followed by a linear projection layer. Providing the encoded graph as a soft prompt guides the LLM's response by feeding a trained embedding vector into the self-attention layers of the language model.
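The merging step of Section 4.3 amounts to a plain union of node and edge sets. A minimal sketch, assuming our own data layout with edges as (head, relation, tail) triples:

```python
def merge_subgraphs(subgraphs):
    """Merge per-subquestion subgraphs (Section 4.3) by taking the union
    of their distinct nodes and edges. Each subgraph is a (nodes, edges)
    pair; edges are (head, relation, tail) triples. Duplicates vanish
    via set union, and no extra connectivity is enforced on the result."""
    nodes, edges = set(), set()
    for sub_nodes, sub_edges in subgraphs:
        nodes |= set(sub_nodes)
        edges |= set(sub_edges)
    return nodes, edges
```

Because each input subgraph is already connected (by PCST), the merged graph is usually connected too; when the unions share no nodes, it is left disconnected rather than patched with virtual edges.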
When answering the complex question, we include the merged graph and its textual description in the prompt; we chose not to include the answers to the subquestions in the final prompt, as a single prior error could force the model to give a wrong answer even when the graph contains the correct answer.

# 5 Experiments

# 5.1 Benchmarks

We evaluate our method on two different Question Answering (QA) benchmarks to assess the quality of our results: CWQ (ComplexWebQuestions) (Talmor and Berant, 2018) and WebQSP (WebQuestionsSemanticParses) (Yih et al., 2016), which are both based on the Freebase (Bollacker et al., 2008) knowledge base. CWQ is a complex QA benchmark that focuses on multi-hop questions. As it requires the integration of multiple facts, it benefits from compositional reasoning, making it a suitable benchmark for our approach. WebQSP, on the other hand, contains a wide range of simple and factual questions. It also includes SPARQL annotations that we do not use in this work. We use the preprocessed version of the datasets provided in Luo et al. (2024).

# 5.2 Evaluation Metrics

We use the standard QA evaluation metrics found in related work. We report performance using accuracy and F1 scores. Accuracy measures exact matches, while F1 allows a more nuanced evaluation, especially when predictions are partially correct. In line with previous studies (Chen et al. (2024), Sun et al. (2024), Luo et al. (2024)), we use Hit@1 as our primary accuracy metric. Hit@1 determines whether the top prediction matches the ground truth and is widely used in QA evaluation. We report both Hit@1 and F1, enabling direct comparison with prior work.

# 5.3 Choice of language models

Our method requires two distinct capabilities, each handled by a different class of models. First, strong decompositional reasoning is needed to break down the complex question into logically ordered, comprehensive, and atomic subquestions.
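For illustration, a few-shot decomposition prompt in the spirit of Section 4.1 could be assembled as below. The instruction wording is hypothetical (the actual prompts are given in the paper's Appendix B); the worked example is the one from Figure 1:

```python
def build_decomposition_prompt(question):
    """Build a few-shot prompt asking an LLM to decompose a complex
    question into logically ordered, atomic subquestions (Section 4.1).
    Instruction text is illustrative, not the paper's exact prompt."""
    instructions = (
        "Decompose the question into logically ordered, atomic "
        "subquestions that together cover the whole question."
    )
    example_q = ("When did the team that Michael's best friend "
                 "support last win the Championship?")
    example_subs = [
        "Who is Michael's best friend?",
        "What team does he support?",
        "When did that team last win the Championship?",
    ]
    lines = [instructions, f"Question: {example_q}", "Subquestions:"]
    lines += [f"{i}. {s}" for i, s in enumerate(example_subs, 1)]
    lines += [f"Question: {question}", "Subquestions:"]
    return "\n".join(lines)
```

The same template generalizes to any number of manually written examples; the model's completion is then parsed back into the ordered list $q_1, \ldots, q_n$.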
There, we use a Qwen-32B model distilled from DeepSeek-R1 (DeepSeek-AI et al., 2025) for its advanced reasoning abilities. Second, we need efficient models to answer the subquestions and generate the final answer. For this, we experiment with both LLaMA-2-7B and LLaMA-2-13B (Touvron et al., 2023b). We also propose a "Hybrid 7B/13B" setting in which the 7B model answers the subquestions, while the 13B model handles the final complex-question answer. The rationale is that atomic subquestions are simple and can be handled by a smaller model, while the final answer—requiring the integration of the full merged graph—benefits from the greater capacity of a larger model. This setting leverages model efficiency by allocating larger capacity only where necessary. We evaluate both uniform and hybrid settings in Section 6.

# 5.4 Balancing the Retrieval Query

Using only an atomic subquestion for retrieval can lead to ineffective results, as it lacks the broader context of the original complex question. To address this, we propose balancing the influence of the complex question and the current subquestion in the retrieval query embedding. We introduce an $\alpha$ parameter (Section 4.2) that controls this trade-off via a weighted average of their respective query embeddings. As shown in Figure 3, $\alpha$ determines the contribution of each: lower values (close to 0) emphasize the subquestion, while higher values shift focus toward the original complex question. When $\alpha = 1$, retrieval is based solely on the complex question, without any decompositional reasoning, as in He et al. (2024b) (see Figure 4).

# 6 Results

# 6.1 Influence of the $\alpha$ parameter

During retrieval, we use both the complex question and its subquestions, with the $\alpha$ parameter controlling their relative importance in the query (Figure 3). We vary $\alpha$ and report model accuracy in Figure 4.
We observe that using a larger model (13B) in the final answer stage (the 7B/13B and 13B setups) significantly outperforms the 7B-only setup; however, using such a model for answering the subquestions offers no clear benefit, as the hybrid 7B/13B and 13B-only setups yield similar results. Across all setups, extreme $\alpha$ values (near 0 or 1) underperform, and intermediate values work best. This supports the need to balance focus between the subquestions and the main question during retrieval. In the rest of the paper, we use $\alpha = 0.7$.

Varying $\alpha$ also impacts the structure of the retrieved graph, potentially affecting the connectivity constraint previously ensured for subgraphs by the PCST algorithm. Higher $\alpha$ leads to more connected and denser merged graphs (Figures 9, 10), while lower values produce more distinct subgraphs and a sparser, occasionally disconnected graph. Although we observe that connected graphs empirically yield better performance (Figure 5), disconnected ones remain rare in proportion (Figure 9). Despite the drop in performance in these cases, results remain competitive with state-of-the-art methods (Table 1). We discuss the statistical significance of these results in Appendix A.

Figure 5: Model Accuracy (Hit@1) for connected and disconnected graphs against the value of the $\alpha$ parameter for the CWQ dataset.

Figure 6: Exact Matching against the value of the $\alpha$ parameter, for the CWQ benchmark.

Since our focus is on improving retrieval, we report the Exact Matching scores for different $\alpha$ values. The Exact Matching score is defined as the percentage of graphs containing a node that exactly matches the answer label. We observe in Figure 6 that focusing on the subquestions rather than the complex question leads to higher Exact Matching scores. Setting $\alpha = 1$ serves as a sanity check to verify that we obtain metrics similar to He et al. (2024b). Additional results for Exact Matching can be found in Appendix A.
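The Exact Matching score just defined takes only a few lines to compute. A sketch under an assumed data layout (each retrieved graph given as an iterable of node-label strings):

```python
def exact_matching_score(retrieved_graphs, answers):
    """Percentage of retrieved graphs containing a node whose label
    exactly matches the gold answer (metric of Section 6.1).
    Data layout is illustrative, not the authors' code."""
    hits = sum(
        any(label == answer for label in graph)
        for graph, answer in zip(retrieved_graphs, answers)
    )
    return 100.0 * hits / len(answers)
```

The softer Matching score would replace the equality test with a cosine-similarity threshold between label embeddings.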
We observe similar results for the Matching score: we define this metric as the percentage of retrieved graphs that contain a node very similar to the answer label (based on a cosine similarity between embeddings, using a similarity threshold of 0.9). This more flexible metric allows us to check the presence of highly related nodes in the retrieved graph.

# 6.2 Graph size

$K_{n}$ and $K_{e}$ correspond respectively to the number of relevant nodes and edges that we consider to build the connected subgraph with PCST. At the retrieval step, we set the values of $K_{n}$ and $K_{e}$ to extract a certain number of relevant entities from the original graph (Figure 3). Choosing higher values of $K_{n}$ and $K_{e}$ leads to a larger quantity of retrieved information, which improves the probability of retrieving relevant nodes and edges, but also increases potential noise in the subgraph that we are building. Logically, choosing higher values of $K_{n}$ produces significantly larger graphs (Figure 12 in Appendix A), which can be harder to handle for the LLM.

Figure 7: Accuracy (Hit@1) against the value of the $K_{n}$ parameter (retrieved nodes), for the CWQ benchmark. These results were obtained for our 7B model.

We also show (see Figure 7) that setting a high value for $K_{n}$ (or $K_{e}$) does not lead to better performance for our 7B model. This observation was also made in He et al. (2024b) on the WebQSP dataset. Setting $K_{n}$ too low (e.g. $K_{n} = 2$) does not allow the model to retrieve enough knowledge from the graph, but setting $K_{n}$ too high (above 5 empirically for our method) adds noise (non-relevant nodes) to the retrieved subgraphs and can disrupt the correctness of the generated answers.

Figure 8: Model Accuracy (Hit@1) for different model sizes, for the CWQ benchmark. In the default setting, we use $K_{n} = 3$, $K_{e} = 5$; for "larger graphs", we use $K_{n} = 5$, $K_{e} = 7$.

If we use a larger model (see Figure 8), the difference between using $K_{n} = 3$, $K_{e} = 5$ and $K_{n} = 5$, $K_{e} = 7$ (denoted as "larger graphs") does not lead to significant improvement; although the larger setting gives a higher chance of retrieving important nodes and edges, this observation highlights the presence of noise within the larger retrieved graphs. For the evaluation of our method, we use the default values of $K_{n} = 3$ and $K_{e} = 5$.

# 6.3 Main Results

For our main evaluation, we consider various baselines and model configurations. In particular, we highlight the "Hybrid 7B/13B" setting, where a 7B model answers each subquestion and a 13B model handles final answer generation (as described in Section 5.3). Across both CWQ and WebQSP benchmarks (Table 1), our method achieves strong performance compared to approaches using similar model sizes and no fine-tuning. On CWQ, which features multi-hop questions, we observe a significant improvement over prior non-finetuned baselines, including those using larger models like Sun et al. (2024) (70B) and He et al. (2024b) (13B). On WebQSP, a simpler QA dataset, our method still outperforms related methods, though the margin is smaller, likely because decomposition is less helpful for single-hop questions. In both cases, only methods relying on dataset-specific fine-tuning or very large models (e.g., GPT-3.5 in Chen et al. (2024)) achieve better scores, highlighting the value of simple decompositional reasoning at the retrieval stage. A key observation is that our "Hybrid 7B/13B" setup performs comparably to a full 13B pipeline, suggesting that most of the benefits come from decompositional retrieval, not simply model scale.
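The $K_{n}$/$K_{e}$ selection step can be sketched as a simple top-k over relevance scores. This is an illustrative NumPy sketch: the selected elements then serve as prized nodes and edges for the PCST solver, which is not shown here:

```python
import numpy as np

def topk_prizes(node_scores, edge_scores, k_n=3, k_e=5):
    """Keep the k_n highest-scoring nodes and the k_e highest-scoring
    edges as 'prized' elements for a PCST-style subgraph extraction."""
    top_nodes = np.argsort(node_scores)[::-1][:k_n]   # indices, best first
    top_edges = np.argsort(edge_scores)[::-1][:k_e]
    return sorted(top_nodes.tolist()), sorted(top_edges.tolist())

# toy scores for 4 nodes and 3 edges, keeping the top 2 of each
nodes, edges = topk_prizes(np.array([0.1, 0.9, 0.4, 0.8]),
                           np.array([0.2, 0.7, 0.5]), k_n=2, k_e=2)
# nodes -> [1, 3]; edges -> [1, 2]
```

Raising `k_n`/`k_e` monotonically grows the prized set, which mirrors the trade-off above: more chances to capture relevant entities, but also more noise handed to the LLM.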
Figure 8 highlights this efficiency: we maintain competitive performance while using fewer resources, by relying on a lightweight model for subquestions and a larger one only for the final answer.

Table 1: Performance comparison on the CWQ and WebQSP benchmarks. Bold indicates best results; underlined values indicate second-best. Results are sourced from the original papers: Brown et al. (2020), Wei et al. (2022b), Jiang et al. (2023), Sun et al. (2024), Luo et al. (2024), Chen et al. (2024), He et al. (2024b).

Finally, Table 2 compares the average number of LLM calls for our method with baselines (Sun et al. (2024), Chen et al. (2024)) that made this data available. These methods use iterative cycles to answer the complex question, which does not give any upper bound on the number of calls to the model. In our case, the number of calls to the model depends directly on the number of generated subquestions, which can ultimately be controlled via prompting at the decomposition step. We achieve state-of-the-art accuracy while reducing LLM usage on both CWQ and WebQSP, showing the efficiency of our decompositional retrieval method. Since we use a single LLM call for both decomposition and final answer generation, we can deduce the average number of subquestions generated. Without setting a limit on the number of subquestions, we obtained an average of 2.8 subquestions for CWQ and 2.3 for WebQSP, suggesting that more complex questions result in more subquestions.

Table 2: Average number of LLM calls per question on the CWQ and WebQSP datasets
Large Language Models (LLMs) excel at many NLP tasks, but struggle with multi-hop reasoning and factual consistency, limiting their effectiveness on knowledge-intensive tasks like complex question answering (QA). Linking Knowledge Graphs (KG) and LLMs has shown promising results, but LLMs generally lack the ability to reason efficiently over graph-structured information. To tackle this problem, we propose a novel retrieval approach that integrates textual knowledge graphs into the LLM reasoning process via query decomposition. Our method decomposes complex questions into sub-questions, retrieves relevant textual subgraphs, and composes a question-specific knowledge graph to guide answer generation. For that, we use a weighted similarity function that focuses on both the complex question and the generated subquestions to extract a relevant subgraph, which allows efficient and precise retrieval for complex questions and improves the performance of LLMs on multi-hop QA tasks. This structured reasoning pipeline enhances factual grounding and interpretability while leveraging the generative strengths of LLMs. We evaluate our method on standard multi-hop QA benchmarks and show that it achieves comparable or superior performance to competitive existing methods, using smaller models and fewer LLM calls.
[ "cs.CL", "cs.IR", "cs.LG" ]
# 1. Introduction

In recent years, speech synthesis technologies have achieved remarkable progress, enabling the generation of increasingly natural and convincing synthetic voices. While these advancements in text-to-speech (TTS) and voice conversion (VC) systems demonstrate the potential of conversational AI applications in human-computer interaction, they also raise significant security concerns in biometric authentication systems, particularly in speaker verification applications [1, 2, 3]. Therefore, it is crucial to develop effective and robust systems for synthetic speech detection (SSD) as a means of protection [4]. Recent state-of-the-art SSD systems leverage self-supervised learning (SSL) features as input to sequence-to-sequence architectures for synthetic speech classification. SSL models learn powerful representations by being trained on large amounts of unlabeled data. As a result, SSL models can be applied very effectively to various downstream tasks (e.g., speaker verification, emotion recognition, automatic speech recognition, etc.) [5, 6, 7, 8, 9, 10, 11, 12] when fine-tuned on a limited amount of labelled data. In synthetic speech detection, SSL models can be used as feature extractors, followed by a projection with Multi-Layer Perceptrons (MLPs) and a combination with other architectures such as Conformer [13] or AASIST [14] for SSD and deepfake detection tasks [5, 15]. For a long time, MLPs have been an indispensable component of deep learning models, particularly as fundamental building blocks of architectures like Transformers [16] and Convolutional Neural Networks (CNNs) [17]. However, MLPs are often challenged by high-dimensional data like images [18], owing to their lack of capacity to inherently capture spatial patterns in the data; this raises scalability and efficiency concerns and may lead to suboptimal performance.
Given that speech signals are mostly represented as raw waveforms, spectrograms, or deep embeddings of high dimensionality, the use of MLPs may result in suboptimal performance. The recent emergence of KANs [19], a class of neural networks inspired by the Kolmogorov-Arnold representation theorem, introduces an innovation by employing learnable activation functions. The Kolmogorov-Arnold representation theorem states that any multivariate continuous function can be expressed through sums and compositions of continuous functions of a single variable. Based on that, KANs replace the fixed activation functions in MLPs with learnable univariate functions, enabling a more flexible and interpretable framework for function approximation [20]. Recent works adapting KANs also claim that this new architecture can address various intrinsic limitations of MLPs, especially in handling complex functional mappings in high-dimensional spaces [19, 21, 22], which is particularly promising for speech data. In this paper, we propose a novel system enhanced with the KAN architecture to bridge the gap on various SSD benchmarks. We employ the pre-trained XLS-R model and Conformer encoders, enhanced with the KAN architecture, to approximate the high-dimensional features for a better capability of detecting artifacts in synthetic speech. Our model achieves new state-of-the-art (SOTA) results of $0.80\%$ and $0.70\%$ EER on the ASVspoof 2021 LA set under fixed- and variable-length conditions respectively, while remaining on par with other competitive systems. The subsequent sections of this paper are organized as follows: Section 2 presents the theoretical formulation of KANs and their extensions, Section 3 describes the methods employed in this paper, and Section 4 discusses the experimental implementations and results to compare our model with other SOTA models. Section 5 concludes our contributions.

# 2. Theoretical Formulation

# 2.1.
Multi-Layer Perceptrons (MLPs)

A Multi-Layer Perceptron (MLP) is a fully connected feedforward neural network consisting of multiple layers of neurons. Each neuron in a layer is connected to every neuron in the following layer and applies a nonlinear activation function to the weighted sum of its inputs. The foundation of the MLP is supported by the Universal Approximation Theorem [23]. The theorem states that a feedforward network with a single hidden layer containing a finite number of neurons can approximate any continuous function on compact subsets of $\mathbb { R } ^ { n }$, given appropriate activation functions.

# 2.2. Kolmogorov-Arnold networks (KANs)

The Kolmogorov-Arnold representation theorem states that any continuous multivariate function can be represented as a sum of univariate functions. More specifically, given $\mathbf { x }$ a vector of dimension $n$ and $f$ a multivariate continuous function such that $f : [ 0 , 1 ] ^ { n } \to \mathbb { R }$, it can be expressed as:

$$ f ( { \mathbf { x } } ) = \sum _ { q = 1 } ^ { 2 n + 1 } \Phi _ { q } ( \sum _ { p = 1 } ^ { n } \phi _ { q , p } ( x _ { p } ) ) , $$

where $\phi _ { q , p } : [ 0 , 1 ] \to \mathbb { R }$ and $\Phi _ { q } : \mathbb { R } \to \mathbb { R }$ are univariate functions. Equation 1 demonstrates that the representation and computation of complex functions can be significantly simplified. This characteristic has motivated new neural network topologies, particularly in the domains of function approximation and dimensionality reduction, and offers theoretical support for high-dimensional data modelling. Leveraging this theorem, [19] introduces a generalized KAN layer that learns activation functions, which are univariate functions on edges. Formally, a KAN layer with $d _ { i n }$-dimensional inputs and $d _ { o u t }$-dimensional outputs is described in Equation 2.

$$ f ( \mathbf { x } ) = \Phi \circ x = [ \sum _ { i = 1 } ^ { d _ { i n } } \phi _ { i , 1 } ( x _ { i } ) , . . .
, \sum _ { i = 1 } ^ { d _ { i n } } \phi _ { i , d _ { o u t } } ( x _ { i } ) ] , $$

where $\phi _ { i , j }$ is a univariate transformation and $\Phi$ captures the combined mapping across the input dimensions. In KANs, each learnable univariate function $\phi _ { q , p }$ can be defined as a B-spline:

$$ \mathrm { s p l i n e } ( x ) = \sum _ { i } c _ { i } B _ { i } ( x ) . $$

The activation function $\phi ( x )$ is a weighted combination of a basis function $b ( x )$ with the B-spline. Given $\mathrm { S i L U } ( x ) = \frac { x } { 1 + e ^ { - x } }$, where $w _ { b }$ and $w _ { s }$ are the corresponding weights of the basis function and the spline function, the activation function can be expressed as:

$$ \phi ( x ) = w _ { b } \cdot \mathrm { S i L U } ( x ) + w _ { s } \cdot \mathrm { s p l i n e } ( x ) . $$

Equation 5 illustrates the general architecture of an $L$-layer KAN:

$$ \begin{array} { r } { \mathrm { K A N } ( \mathrm { x } ) = ( \Phi _ { L - 1 } \circ \Phi _ { L - 2 } \circ \dots \circ \Phi _ { 0 } ) ( x ) . } \end{array} $$

# 2.3. Group Rational Kolmogorov-Arnold networks (GR-KANs)

Yang et al. [24] suggest replacing the B-spline with a rational function due to the limitations of standard KANs. GR-KAN divides the $I$ input channels into $k$ groups and shares the parameters of the rational functions across all channels within the same group. Equation 6 describes the formula of a GR-KAN layer applied to an input vector $x$, where $\phi$ is now a rational function, $I / k$ represents the dimension size of each group, $O$ is the dimension of the output vector, and $w$ refers to unique scalars.
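A toy numerical sketch of the KAN activation in Equation 4 follows, using a degree-1 (hat-function) B-spline basis instead of the higher-order splines typically used; the uniform grid and all names are illustrative:

```python
import numpy as np

def silu(x):
    # SiLU(x) = x / (1 + e^{-x}), the basis function b(x) in Equation 4
    return x / (1.0 + np.exp(-x))

def hat_basis(x, grid):
    """Degree-1 B-spline (hat-function) basis on a uniform grid:
    each basis function peaks at one knot and decays linearly."""
    h = grid[1] - grid[0]
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - grid[None, :]) / h)

def kan_activation(x, coeffs, grid, w_b=1.0, w_s=1.0):
    """phi(x) = w_b * SiLU(x) + w_s * sum_i c_i B_i(x)  (Equation 4)."""
    return w_b * silu(x) + w_s * hat_basis(x, grid) @ coeffs

grid = np.linspace(-2.0, 2.0, 9)   # knot positions
coeffs = np.zeros(9)               # all-zero spline -> phi reduces to SiLU
x = np.array([-1.0, 0.0, 1.0])
out = kan_activation(x, coeffs, grid)
```

Training a KAN layer amounts to learning `coeffs`, `w_b`, and `w_s` for every input-output edge; with the spline coefficients at zero, the activation degenerates to a plain SiLU, which makes the sketch easy to sanity-check.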
$$ \begin{array} { r l } & { L ( \mathbf { x } ) = \Phi \circ \mathbf { x } } \\ & { \qquad = \left[ \sum _ { i = 1 } ^ { I } w _ { i , 1 } \phi _ { \lfloor \frac { i } { I / k } \rfloor } ( x _ { i } ) \quad \cdots \quad \sum _ { i = 1 } ^ { I } w _ { i , O } \phi _ { \lfloor \frac { i } { I / k } \rfloor } ( x _ { i } ) \right] } \end{array} $$ The key improvement that GR-KAN introduces over KAN lies in its weight initialization strategy, which aims to ensure a variance-preserving effect across the network. This initialization helps prevent the increase or decrease in activation magnitudes throughout the layers, thereby promoting network stability. Additionally, GR-KAN weights can be seamlessly loaded with the weights from an MLP’s linear layer, as the GR-KAN layer integrates both a linear layer and a group-wise rational layer. # 3. Proposed Methodology # 3.1. Baseline Model Architectures We use two state-of-the-art architectures as our baseline, XLSR-Conformer [6] and its variant XLSR-Conformer+TCM with an additional temporal-channel dependency modelling (TCM) module. As can be seen in Figure 1, the XLSRConformer baseline comprises two main parts: (i) the pretrained XLS-R [25], which is a variant of the wav2vec 2.0 [26] model, utilised as the feature extractor to capture contextualised representations from the high-dimensional speech signal; and (ii) the Conformer Encoder. Given an input speech signal $O$ , the $T$ -length output SSL features are denoted as $\bar { X ^ { } } = \mathrm { S S L } ( O ) = ( \check { x _ { t } } \in \mathbb { R } ^ { \hat { D } } | t = 1 , . . . , T )$ , with $D$ being the output dimension of the SSL model. The extracted features $X$ are projected to a lower dimension by an MLP with SeLU activation function before being fed to the Conformer Encoder. 
Our projected features are denoted as $\tilde { X } = \operatorname { S e L U } ( \operatorname { L i n e a r } ( X ) )$, where $\tilde { X } = ( \tilde { x } _ { t } \in \mathbb { R } ^ { D ^ { \prime } } \mid t = 1 , . . . , T )$. The Conformer Encoder is a stack of $L$ Conformer blocks, each containing a Multi-Head Self-Attention (MHSA) module and a Convolutional Module sandwiched between two feed-forward modules. In order to adapt the sequence-to-sequence Conformer architecture to a classification task, a learnable classification token is prepended to the input embedding of the Conformer Encoder. The input to the Conformer Encoder is $\tilde { X } _ { i n } = [ \tilde { X } , \tilde { X } _ { \mathrm { C L S } } ]$, with $\tilde { X } \in \mathbb { R } ^ { T \times D ^ { \prime } }$ and $\tilde { X } _ { \mathrm { C L S } } \in \mathbb { R } ^ { 1 \times D ^ { \prime } }$, where $[ \cdot , \cdot ]$ denotes the concatenation operation. Finally, the state of the classification token $\tilde { X } _ { \mathrm { C L S } }$ at the output of the last Conformer block is fed to a linear layer to classify the input speech signal as bonafide or spoof. During training, the classification token $\tilde { X } _ { \mathrm { C L S } }$ learns to capture the most relevant features to distinguish synthetic speech from genuine speech. Truong et al. [6] proposed integrating a channel representation head token into the temporal input tokens within the MHSA, yielding the XLSR-Conformer+TCM model, which is capable of learning temporal-channel dependencies from the input sequence. The architecture is similar to the XLSR-Conformer baseline, with modifications confined to the MHSA module. XLSR-Conformer+TCM reduces the deepfake detection error rate by $26\%$ relative while remaining competitive on the logical access benchmark of ASVspoof 2021.
In the aforementioned baseline architectures, we assume that the projection done by the MLP may lead to suboptimal performance owing to the complexity of the high-dimensional contextualised representations. Therefore, we propose to replace the MLP with a simple GR-KAN layer to effectively approximate the functional mapping without excessive computation overhead.

Figure 1: Architecture of the baseline XLSR-Conformer model. The XLSR-Conformer+TCM baseline only modifies the MHSA module.

# 3.2. XLSR-GRKAN-Conformer Model

Our proposed architecture with XLSR-Conformer enhanced by GR-KAN is illustrated in Figure 2. Given the demonstrated superiority of KAN in function approximation and dimensionality reduction compared to traditional MLP architectures, along with the proven ability of GR-KAN to maintain training stability as discussed in Sections 2.2 and 2.3, we propose to replace the conventional MLP projection from XLS-R features to the Conformer Encoder with the GR-KAN implementation. Therefore, the input to the Conformer Encoder is now represented by $\tilde { X } _ { i n } = [ \tilde { X } _ { \mathrm { G R K A N } } , \tilde { X } _ { \mathrm { C L S } } ] \in \mathbb { R } ^ { T \times D ^ { \prime } }$, where $\tilde { X } _ { \mathrm { G R K A N } } = \mathrm { S e L U } ( \mathrm { G R } \text{-} \mathrm { K A N } ( X ) )$, $X \in \mathbb { R } ^ { T \times D }$ is the same feature from the SSL model as in Section 3.1, and $\mathrm { G R } \text{-} \mathrm { K A N } ( \cdot )$ is determined by Equation 6.

Figure 2: Architecture of the feature projector with GR-KAN.

This architectural modification is primarily motivated by the inherent requirement to reduce the dimensionality of speech representations derived from SSL models before they can serve as input to sequence-to-sequence models like the Conformer, a scenario where KAN's strengths become particularly advantageous.
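To make the data flow of the GR-KAN projector concrete, here is a shape-level NumPy sketch. The learned group-wise rational activations of Equation 6 are replaced by a simplified fixed-denominator rational, the CLS token handling mirrors Section 3.1, and all names and shapes are illustrative:

```python
import numpy as np

def rational(x, p):
    """Simplified rational activation P(x) / (1 + |x|); GR-KAN's learned
    denominator polynomials are replaced here by a fixed safe denominator."""
    return np.polyval(p, x) / (1.0 + np.abs(x))

def grkan_project(X, W, P, k):
    """GR-KAN-style projection: the D input channels are split into k
    groups sharing one rational activation, then linearly mixed by W."""
    T, D = X.shape
    group = np.arange(D) // (D // k)      # group index of each channel
    acts = np.stack([rational(X[:, i], P[group[i]]) for i in range(D)], axis=1)
    return acts @ W                        # (T, D')

# toy shapes: T=10 frames, D=16 SSL dims projected to D'=8, k=4 groups
rng = np.random.default_rng(0)
T, D, D_out, k = 10, 16, 8, 4
X_proj = grkan_project(rng.normal(size=(T, D)),
                       rng.normal(size=(D, D_out)),
                       rng.normal(size=(k, 3)),   # numerator coeffs per group
                       k)
X_in = np.vstack([X_proj, np.zeros((1, D_out))])  # concatenate a CLS token
```

The sketch only demonstrates the channel grouping and the drop from `D` to `D'` before the Conformer Encoder; in the actual model both the rational coefficients and the mixing weights are learned.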
Furthermore, the GR-KAN's inherent capability to mitigate activation magnitude fluctuations across network layers potentially minimizes the loss of valuable features extracted by the SSL model. These architectural improvements collectively enhance the performance of the baseline model, as will be demonstrated in the following sections.

# 4. Experiments

# 4.1. Datasets and Evaluation metrics

The training and development datasets are sourced from the ASVspoof 2019 [28] logical access (LA) track, including bonafide speech and synthetic speech generated from two speech synthesis techniques: voice conversion and text-to-speech. We evaluate our model on the ASVspoof 2021 [29] logical access (LA) and deepfake (DF) corpora. Two known and eleven unknown attack types are included in the ASVspoof 2021 LA evaluation set. To replicate real-world scenarios, the speech data is subjected to various codec and compression changes. Additionally, in contrast to the LA set, ASVspoof 2021 presented a new deepfake (DF) evaluation set with two more source datasets. Our primary evaluation metrics are the commonly used equal error rate (EER) and minimum normalized tandem detection cost function (min t-DCF).

# 4.2. Experimental Setup

To ensure a fair comparison, we retained all configurations mentioned in the baseline models [5, 6]. During training, the audio data are either trimmed or padded to approximately 4 seconds. We used the Adam optimizer with a learning rate of $10^{-6}$ and a weight decay of $10^{-4}$. The final results are reported by averaging the top 5 best models on the validation set. Early stopping is applied if the validation loss does not improve after 7 epochs. We used the codebase of the baseline model and the official implementation of GR-KAN. Additionally, we applied the RawBoost data augmentation technique to the training data.
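The EER metric used throughout can be computed with a short sketch; this is a simple threshold sweep over the candidate operating points (production toolkits use more careful interpolation between ROC points):

```python
import numpy as np

def compute_eer(bonafide_scores, spoof_scores):
    """Equal error rate: operating point where the false-acceptance rate
    and false-rejection rate cross (higher score = more bonafide)."""
    thresholds = np.sort(np.concatenate([bonafide_scores, spoof_scores]))
    candidates = []
    for t in thresholds:
        far = np.mean(spoof_scores >= t)     # spoof accepted as bonafide
        frr = np.mean(bonafide_scores < t)   # bonafide rejected as spoof
        candidates.append((abs(far - frr), (far + frr) / 2))
    return min(candidates)[1]                # EER at the closest crossing

bona = np.array([0.9, 0.8, 0.7, 0.85])
spoof = np.array([0.1, 0.2, 0.3, 0.75])
eer = compute_eer(bona, spoof)               # one spoof overlaps -> 0.25
```

On this toy example a single hard spoof trial (score 0.75) overlaps the bonafide range, so FAR and FRR cross at 25%.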
The configuration and parameters of RawBoost [30] used in our experiments follow those in our baseline paper [6]. For evaluation on the LA and DF tracks, we trained two distinct SSD systems with separate RawBoost settings: combining linear and nonlinear convolutive noise with impulsive, signal-dependent additive noise strategies for the LA track, and stationary, signal-independent additive, randomly coloured noise for the DF track.

# 4.3. Experimental Results

Table 1 presents the results on the ASVspoof21 LA and DF evaluation sets. Our proposed models, XLSR-GRKAN-Conformer and XLSR-GRKAN-Conformer+TCM, consistently outperform their corresponding baselines across all evaluation settings. Compared to XLSR-Conformer (Baseline 1), XLSR-GRKAN-Conformer achieves a relative improvement of $1.87\%$ in EER for 21LA (Fix) $(1.07 \to 1.05)$ and $17.8\%$ for 21LA (Var) $(1.07 \to 0.88)$. Similarly, the min t-DCF score improves from 0.2136 to 0.2085, reflecting better calibration. Against XLSR-Conformer+TCM (Baseline 2), our XLSR-GRKAN-Conformer+TCM model yields even stronger results, reducing EER by $54.0\%$ for 21LA (Fix) and $67.1\%$ for 21LA (Var), with a corresponding min t-DCF improvement of $12.7\%$. For the 21DF task, XLSR-GRKAN-Conformer outperforms Baseline 1 in both fixed-length $(2.55 \to 1.95$, a $23.5\%$ relative reduction) and variable-length $(2.55 \to 2.31$, a $9.4\%$ relative reduction) settings. Our XLSR-GRKAN-Conformer+TCM model also shows consistent improvement over Baseline 2, achieving a $7.3\%$ relative reduction in EER for 21DF (Fix) and $5.4\%$ for 21DF (Var). These results confirm that integrating GR-KAN into the Conformer-based system significantly enhances performance over the simple MLP, and the combination with TCM further amplifies these benefits.
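The relative reductions quoted above follow the usual convention, which can be checked with a one-liner:

```python
def rel_reduction(before, after):
    """Relative reduction in percent between two EER values."""
    return 100.0 * (before - after) / before

# XLSR-GRKAN-Conformer vs. XLSR-Conformer on 21LA:
fix_gain = rel_reduction(1.07, 1.05)   # ~1.87% for 21LA (Fix)
var_gain = rel_reduction(1.07, 0.88)   # ~17.8% for 21LA (Var)
```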
Table 1: Performance comparison with SOTA models on the ASVspoof 2021 LA and DF evaluation sets using fixed-length (Fix) and variable-length (Var) utterance evaluation. The best result is bolded, a dash denotes unavailable results, and $\dagger$ denotes reproduced results.

Table 2: Performance comparison with baseline models using variable-length data for both training and evaluation

Table 2 presents a comparison between our model and the baseline model on both LA and DF tracks, using variable-length data for both training and evaluation. Specifically, our model achieves a $59.7\%$ relative reduction in EER for 21LA and a $13.7\%$ improvement in min t-DCF $(0.2387 \to 0.2061)$, while maintaining comparable performance on 21DF $(2.56\% \to 2.54\%)$.

# 4.4. Ablation Study and Analysis

Table 3: EER $(\%)$ results with different SSL models

In this section, we assess the robustness of our proposed replacement of MLP with GR-KAN when working with features from different SSL models of various sizes and architectures. We consider: (i) WavLM, a self-supervised model optimized for speech processing tasks; (ii) XLS-R, a cross-lingual variant of wav2vec 2.0 designed for multilingual speech representation; (iii) UniSpeech-SAT, which incorporates speaker-aware training to enhance speaker and content modeling; and (iv) mHuBERT-147, a multilingual version of HuBERT trained on 147 languages for robust speech representation. For XLS-R, we use the fairseq implementation; the other SSL models employ HuggingFace implementations. Table 3 demonstrates the robustness of replacing MLP with GR-KAN across different SSL models. On average, replacing MLP with GR-KAN results in a $29.1\%$ relative reduction in EER across the different SSL models and tasks, including WavLM, XLS-R, UniSpeech-SAT, and mHuBERT-147.
The performance gains are particularly notable for high-EER models like UniSpeech-SAT and mHuBERT-147, where GR-KAN significantly enhances detection performance. These results confirm that GR-KAN is a more effective and generalizable alternative to MLP, making it a robust choice for various self-supervised learning features.
Recent advancements in speech synthesis technologies have led to increasingly advanced spoofing attacks, posing significant challenges for automatic speaker verification systems. While systems based on self-supervised learning (SSL) models, particularly the XLSR-Conformer model, have demonstrated remarkable performance in synthetic speech detection, there remains room for architectural improvements. In this paper, we propose a novel approach that replaces the traditional Multi-Layer Perceptron in the XLSR-Conformer model with a Kolmogorov-Arnold Network (KAN), a novel architecture based on the Kolmogorov-Arnold representation theorem. Our results on ASVspoof2021 demonstrate that integrating KAN into the SSL-based models can improve the performance by 60.55% relatively on LA and DF sets, further achieving 0.70% EER on the 21LA set. These findings suggest that incorporating KAN into SSL-based models is a promising direction for advances in synthetic speech detection.
[ "cs.SD", "cs.CL", "eess.AS" ]
# 1 Introduction

Despite significant advancements in language models (LMs), challenges persist in effectively handling tail data, such as accommodating the needs of unseen language groups and addressing social biases (Gallegos et al., 2024; Guerreiro et al., 2023). This gap underscores the importance of research endeavors focused on refining LMs to better serve underrepresented populations, with individuals having language disorders being no exception. Anomia, or word-retrieval difficulty, stands as one of the most prevalent symptoms of People With Aphasia (PWA) (Laine and Martin, 2013).

Figure 1: Example circumlocutions for the target item 'Godmother'. C1 (unseen terms): 'Fairy who uses magic to help Cinderella go to the ball'; C2 (SPE): 'Queen who has a wish to help Cinderella go to the ball'.

Anomic individuals typically experience the tip-of-the-tongue phenomenon (Goodglass et al., 1976), where they are aware of the target item they want to convey but face difficulty in retrieving suitable words to articulate it. This difficulty frequently appears as 'circumlocution', where individuals talk around the word. They rely on terms with paraphasic errors, or word substitutions, producing possibly related or completely unrelated words (Friedman, 2015). In this study, we aim to design an LM that assists anomia patients by identifying their intended target item: for the given circumlocution of the individual, the model should identify the target item from the corpus. Surprisingly, while anomia significantly impacts the ability of individuals to engage in meaningful conversations (Code et al., 1999), there is no LM specifically designed for such assistance. Early works focused only on evaluating LMs' performance on intended target identification (Purohit et al., 2023; Salem et al., 2023b). Moreover, current LMs fail to suggest intended items, as will be shown in Sec.
4, further highlighting the need for improvements in this area. We start by specifying two challenges arising from anomic speech.

• C1 - Word retrieval failure: Individuals fail to recall the relevant terms and can only provide limited information about the target item, so the relevant terms are unseen in the circumlocution (Puttanna et al., 2021).

• C2 - Word retrieval error: Individuals make errors in word usage. As anomia is linked with a disorder involving the loss of semantic knowledge about object concepts, it leads to the production of perturbed terms with semantically paraphasic errors (SPE) when attempting to name those concepts (Reilly et al., 2011; Harnish, 2018; Salem et al., 2023a; Binder et al., 2009).

C1 is a challenge commonly faced in search, and thus relatively well studied, aligning with works revealing LMs' vulnerability to incomplete inputs (Yu et al., 2021; Wang et al., 2023; Mackie et al., 2023). An example is shown in the left part of Fig. 1: relevant descriptions, such as 'fairy' or 'go to the ball', are required to identify the target item 'Godmother', but these terms are unseen in the circumlocution. A more unique challenge of anomia is C2: its inherent perturbation from SPE, which may cause the model to identify the wrong item. For example, in the right part of Fig. 1, the individual uses words such as 'queen', which are perturbed with respect to the target item. Such SPE terms are not semantically related to the target item and therefore do not assist the model's identification process; they may even be detrimental. Specifically, our pilot study found that roughly $40\%$ of the terms in the circumlocution degrade model performance. Therefore, anomia presents a complex challenge where we must navigate the unseen terms (C1) amidst the innate presence of SPE (C2). To this end, we introduce a novel augmentation approach involving gradient-based selection of augmentation targets, called GradSelect.
The goal is described in color on the left side of Fig. 2: we delete the SPE terms while expanding the unseen terms. To delete the SPE terms (C2), we take an adversarial approach: by injecting more noise into the circumlocution, we robustify the model against diverse SPE terms. However, the challenge is that the inherently perturbed circumlocution easily loses its relevance to its original target after noise injection. Our contribution is to control the quality of the data, ensuring both diversity and relevance (Ash et al., 2019), by assessing the gradient value of each term to select the targets for injecting noise. The process is described in Fig. 2-(a). While we inject noise into high-gradient terms to diversify the model's representation, we prevent noise from affecting the top-n gradient terms. This is based on our core finding that such terms are usually unperturbed keywords crucial for maintaining relevance to the correct item. From the denoised circumlocution, we then address the relevant but unseen terms (C1) by taking inspiration from pseudo-relevance feedback (PRF) (Croft et al., 2010; Lavrenko and Croft, 2017). The process follows Fig. 2-(b). To expand unseen terms (e.g., 'fairy') into seen terms, we augment the target items using the top retrieved items from the initial prediction. Here, we select the candidate items ranked higher than the target item. This stems from the observation that items with relevant terms exhibit high gradient variance, which can be approximated by their relative rank (Zhou et al., 2022). Our exploration of this methodology begins with the Tip-of-the-Tongue dataset (Bhargav et al., 2022; Arguello et al., 2023), due to the scarcity of real-world datasets that precisely target anomia. Subsequently, we apply and validate our findings using real patient data from A-cinderella (Salem et al., 2023b), encompassing both the original dataset and our custom challenge set.
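The gradient-based selection of noise-injection targets in step (a) can be sketched as follows. This is an illustrative NumPy sketch: how the per-term gradients are obtained and what noise is injected are left abstract, and the cutoffs are hypothetical:

```python
import numpy as np

def select_noise_targets(grad_norms, n_protect=2, n_inject=3):
    """Rank terms by gradient norm (descending); keep the top-n as
    protected keywords, then pick the next terms as noise targets."""
    order = np.argsort(grad_norms)[::-1]
    protected = order[:n_protect].tolist()           # likely unperturbed keywords
    targets = order[n_protect:n_protect + n_inject].tolist()
    return sorted(targets), sorted(protected)

# toy per-term gradient norms for a 6-term circumlocution
grads = np.array([0.1, 0.9, 0.5, 0.7, 0.05, 0.6])
targets, keywords = select_noise_targets(grads)
# keywords -> [1, 3] (kept intact); targets -> [0, 2, 5] (receive noise)
```

The design choice mirrors the text: high-gradient terms are the most diversifying places to perturb, but the very top of the ranking is spared so the augmented circumlocution stays relevant to its original target.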
The results demonstrate that GradSelect can improve identification accuracy by effectively controlling the quality of the augmented data.

# 2 Pilot Study on Circumlocution Terms

This section discusses the existence and effect of each of the C1 (relevant-unseen) and C2 (seen-SPE) terms in the circumlocution.

# 2.1 Data Source and Models

It is difficult to directly evaluate the effectiveness of our method on real-world anomic patients due to the scarcity of datasets tailored to our specific task. Therefore, we conduct a pilot study on the TREC-TOT 2023 movie retrieval task (Arguello et al., 2023), which involves identifying a target movie based on circumlocutions from individuals experiencing the 'Tip of the Tongue' phenomenon, a temporary form of anomia. The queries often contain incomplete and dummy information from false memories, similar to our anomia scenario.

Figure 2: Overview of GradSelect on an example circumlocution ('not the mother oh the one that take care of them the good one the queen no … she had a wish … it's a angel it's another word it's not the angel'). (a) Gradient values guide noise injection: the top-n gradient terms (unperturbed keywords) are protected, while noise is injected into the remaining high-gradient terms. (b) The target item 'Godmother' is augmented with retrieved items ranked above it (e.g., '#1 Fairy'), selected via gradient variance approximated by relative rank.

We used the lexical retriever BM25 (Robertson et al., 2009) and the dense retriever co-Condenser (Gao and Callan, 2022) for the pilot study. BM25 is a traditional information retrieval (IR) method that relies on exact term matching to find the target document for a query.
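A minimal self-contained Okapi BM25 scorer, of the kind used in the pilot study, can be sketched as follows; the idf variant and the default `k1`/`b` values are common choices, not necessarily those of the cited implementation:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score tokenized documents against a tokenized query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))   # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if t not in tf:
                continue   # exact term matching: unseen terms contribute nothing
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [["fairy", "magic", "cinderella"], ["queen", "castle"], ["fairy", "tale"]]
scores = bm25_scores(["fairy", "cinderella"], docs)
```

The `continue` branch makes the C1 challenge visible: a document sharing no terms with the circumlocution scores exactly zero, no matter how semantically related it is.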
On the other hand, a dense retriever uses dense vector representations for queries and documents, capturing semantic similarity rather than exact term matches. co-Condenser is one such dense retriever, which additionally pre-trains the encoder with a corpus-level contrastive loss. We use the co-Condenser version from Kim et al. (2023) and refer to it as 'co-Condenser\*'. The normalized discounted cumulative gain (nDCG) score (Järvelin and Kekäläinen, 2002) is used to evaluate the performance of the IR models.

# 2.2 Relevant but Unseen Terms

The definition and presence of unseen terms are determined through a simple rule: 'seen terms' are the terms that appear in the circumlocution, while 'unseen terms' are those that do not. Previous work found that the lexical overlap between the circumlocution and the target item is lower in the Tip-of-the-Tongue movie domain than in conventional IR benchmarks (e.g., MSMarco (Nguyen et al., 2016): 0.55 vs. Tip-of-the-Tongue movie (Bhargav et al., 2022): 0.25), and the same trend is reported in the book and music domains (Bhargav et al., 2023; Lin et al., 2023).

Furthermore, we compare the performance of BM25 on TREC-TOT with other datasets: MSMarco (Nguyen et al., 2016) and BEIR (Thakur et al., 2021), a collection of 18 IR datasets. The results are reported in Table 1. The performance of BM25 on TREC-TOT is far behind the other datasets, which indicates the challenge of unseen terms.

Table 1: The BM25 performances that confirm the challenge of unseen terms. nDCG@10 scores are reported.

# 2.3 Seen but SPE Terms

We define 'SPE terms' as those that are not semantically related to the target item and explore how these terms undermine the process of target identification.

Table 2: Impact of random sentence deletion on model performance. We measure the change of nDCG on the test set of TREC-TOT.
In this paper, which terms in the circumlocution are considered SPE is model-dependent. If a term is semantically related to the target item, it aids the model in correctly identifying that item; conversely, a term with SPE either does not assist the model or can even hinder its performance. The existence of these perturbed terms is confirmed through both quantitative and qualitative methods. For the quantitative analysis, we evaluated the change in performance of the semantic retriever, co-Condenser\*, when deleting parts of the circumlocution. Our key point is that if the circumlocution suffers from SPE terms that negatively impact retrieval, there will be instances where deleting these terms improves model performance. Specifically, we start by filtering out the completely unrelated sentences in the circumlocution. The TREC-TOT dataset provides sentence-level annotations indicating whether a sentence is about the movie or not. The latter type of sentence includes social words (e.g., Thanks) or details about the context in which the movie was watched (e.g., with my 6-year-old nephew). With these annotations, we filtered 18.7% of the sentences that are completely unrelated to the target movie. We then measured how deleting each sentence in the circumlocution affects the performance of co-Condenser\* on the filtered TREC-TOT test set. The results in Table 2 imply that 40.1% of the sentences in the dataset include SPE terms that cause the model to predict the wrong item. We further confirm qualitatively that SPE terms mislead the model into identifying the wrong item. We selected queries for which the model could not identify the correct target item and manually deleted terms that we considered perturbed. A case study example is shown in Appx. B.
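The sentence-deletion probe above can be sketched as follows. The snippet uses a stand-in cosine similarity over toy bag-of-words vectors in place of co-Condenser\*, and flags a sentence when removing it raises the target item's score; the sentences and target document are illustrative:

```python
# Leave-one-sentence-out probe: if deleting a sentence raises the target's
# relevance score, that sentence likely carries SPE terms.
from collections import Counter
import math

def score(query, doc):
    """Stand-in for a retriever's query-document similarity:
    cosine over bag-of-words term counts."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[t] * d[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def spe_sentence_probe(sentences, target_doc):
    """Return indices of sentences whose deletion improves the target score."""
    base = score(" ".join(sentences), target_doc)
    flagged = []
    for i in range(len(sentences)):
        ablated = " ".join(sentences[:i] + sentences[i + 1:])
        if score(ablated, target_doc) > base:  # deletion helped -> suspect SPE
            flagged.append(i)
    return flagged

sentences = ["the good queen takes care of them",
             "it is an angel no not an angel"]   # second sentence: SPE-like
target = "the fairy godmother takes care of cinderella the good queen"
print(spe_sentence_probe(sentences, target))     # flags the SPE-like sentence
```

The paper's version of this probe measures the change in nDCG rather than a raw similarity score, but the ablation logic is the same.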
By doing so, we enabled the retriever to rank the target item as the top-1, whereas with the SPE terms it was ranked below the top-10. This implies that SPE terms significantly degrade model performance and that robustifying the model against such terms is necessary.

# 3 Methods

Our goal is to identify the intended target item from the item corpus given the circumlocution. The flow of GradSelect is depicted in Alg. 1. Leveraging the gradient as a proxy for SPE, GradSelect selectively augments the dataset so that the semantics of the circumlocution and the target item properly overlap. It is designed to denoise the SPE terms among the seen terms (Subsec. 3.1) and to enhance the circumlocution with unseen but relevant terms (Subsec. 3.2).

# Algorithm 1: GradSelect

Inputs: Circumlocution $C$, item $I$, training dataset $\mathcal{T} = \{(C, I_{+})\}$, teacher model $\Theta_t$, student model $\Theta_s$
Output: Prediction for the intended target item

# Circumlocution Augmentation
1: Initialize $\Theta_t$ with parameters $\theta$
2: for $C \in \mathcal{T}$ do
3:   $C = \{c_1, c_2, \ldots, c_i, \ldots, c_l\}$  # Token list of $C$
4:   Compute the importance score $\mathrm{IMP}_{c_i}$ for each $c_i$ using gradient values  # Refer to Eq. (1)
5:   Rank the tokens in $C$ in descending order of importance and select a subset $C[m:n]$
6:   Apply augmentation targeting $C[m:n]$ to generate $C_{\mathrm{aug}}$
7:   Calculate the loss and update $\Theta_t$  # Refer to Eq. (2)
8: end for
# Item Augmentation
9: Initialize a new training set $\mathcal{T}'$
10: for $C \in \mathcal{T}$ do
11:   Get the ranked list $R$ by predicting the target item for $C$ with $\Theta_t$
12:   $r_g \leftarrow$ rank of the intended target item in $R$
13:   if $r_g > k$ then
14:     Add top-$k$ items $(C, I'_{+})$ from $R$ to $\mathcal{T}'$
15:   end if
16: end for
# Student Model Training
17: Initialize $\Theta_s$ with parameters $\theta$
18: for $C \in \mathcal{T} \cup \mathcal{T}'$ do
19:   Repeat the subprocedure of Circumlocution Augmentation (lines 3-7) using $\Theta_s$
20: end for
# Model Evaluation
21: Get predictions with $\Theta_s$ on the test set
22: return Ensembled predictions from $\Theta_s$ and $\Theta_t$

# 3.1 Deleting Seen but SPE Terms

We first target the seen but SPE terms with an adversarial approach: we expose the model to diverse forms of SPE terms, forcing it to learn the terms that are crucial for accurate prediction rather than relying on SPE terms. The challenge is that, as the circumlocution is inherently perturbed by SPE, unconstrained noise injection might leave only SPE terms with no relevance to the target item. To overcome this, we propose to control the quality of the data by selecting the pool of terms to be noised. We focus on augmenting data that are semantically diverse (covering a wide range of expressions) yet still relevant (remaining pertinent to the target), the two measures of data quality (Ash et al., 2019; Zhao et al., 2022).

# 3.1.1 Circumlocution Augmentation

Our contribution is to leverage the gradient value of each term to select the augmentation target, balancing diversity and relevance. In essence, both SPE terms and relevant terms significantly impact the model's prediction of the item, and a high gradient value of a term indicates such an impact.
Our key finding is that during training, once the model is reliable, the top-gradient terms are the unperturbed keywords essential for predicting the correct item, while SPE terms have a less pronounced impact on accurate prediction (Subsec. 4.3). Therefore, while we noise the terms that affect model performance for diversity, we leave the keywords untouched to preserve semantic relevance. The selection algorithm is given in Alg. 1, lines 1-8. Let $C$ be the input circumlocution with tokens $[c_i]_{i=1}^{l}$ and $C^h = [\boldsymbol{c}_i^h]_{i=1}^{l}$ the input embedding matrix. We augment the training data from $C$ to $C_{\mathrm{aug}}$ by noising vectors along the token axis $(l)$. We first compute the importance of each embedding by calculating the gradient magnitude, following (Li et al., 2019; Wu et al., 2023), and sum the scores across the hidden dimension of the embedding space to obtain a token-level importance score. The scoring function for the $i$-th token, $\mathrm{IMP}_{c_i}$, is

$$
\mathrm{IMP}_{c_i} = \left\| \frac{\partial \mathcal{F}_I(C)}{\partial \boldsymbol{c}_i^h} \right\|_2^2 \tag{1}
$$

where $\mathcal{F}_I(C)$ represents the model's prediction of relevance scores for the items. We rank all tokens by their importance score $\mathrm{IMP}_{c_i}$ in descending order. Tokens ranked below the top $n$ have low gradients and are mostly stop words (Wang et al., 2020) or do not affect the semantics, so focusing the noise on the high-gradient terms $(C[:n])$ effectively creates diverse meanings of the circumlocution. We then preserve the top $m$ terms of each circumlocution so that the key relevant terms are retained. The resulting noise therefore exclusively targets the tokens within the $C[m:n]$ range.
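The selection of the $C[m:n]$ noise targets can be sketched as follows, assuming the per-token embedding gradients from the backward pass are already available as an $l \times h$ array (the random gradients and the $m$, $n$ values are illustrative):

```python
import numpy as np

def select_noise_targets(token_grads, m, n):
    """Rank tokens by squared L2 gradient norm (in the spirit of Eq. 1) and
    return the indices in the [m:n] importance range: the top-m are kept as
    keywords, tokens beyond n are treated as stop words, the rest are noised."""
    imp = (token_grads ** 2).sum(axis=1)          # IMP per token, shape (l,)
    order = np.argsort(-imp)                      # descending importance
    return order[m:n]                             # indices to inject noise into

rng = np.random.default_rng(0)
grads = rng.normal(size=(10, 8))                  # l=10 tokens, hidden dim 8
targets = select_noise_targets(grads, m=2, n=6)
print(sorted(targets.tolist()))                   # 4 token indices to noise
```

In the actual method these gradients come from backpropagating the retriever's relevance scores to the token embeddings; here they are simulated so the selection logic is self-contained.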
Then, we augment the selected tokens in the circumlocution by injecting noise. This noise can be introduced either by adding random noise to the token embedding (Zhou et al., 2021) or by deleting part of the embedding (Shen et al., 2020). The loss function used for training follows Cutoff (Shen et al., 2020):

$$
\mathcal{L} = \mathcal{L}_{\mathrm{ce}}(C, I) + \alpha \mathcal{L}_{\mathrm{ce}}(C_{\mathrm{aug}}, I) + \beta \mathcal{L}_{\mathrm{js}}(C, C_{\mathrm{aug}}) \tag{2}
$$

where $\mathcal{L}_{\mathrm{ce}}$ represents the cross-entropy loss and $\mathcal{L}_{\mathrm{js}}$ the Jensen-Shannon (JS) divergence consistency loss.

# 3.2 Expanding Relevant but Unseen Terms

While promising, there remains room for improvement along the relevant but unseen dimension of the circumlocution. To this end, we propose to augment the target items with top-ranked candidates that may contain the unseen but relevant terms. The idea aligns with pseudo-relevance feedback (PRF) (Croft et al., 2010; Lavrenko and Croft, 2017), which aggregates the top retrieved items from an initial search into the original query embedding to capture unseen information for a better query representation. Our distinction is that we address the risk of naive PRF expanding irrelevant items (Li et al., 2022) by selectively distilling the relevant items with gradient variance.

# 3.2.1 Target Item Augmentation

We propose to leverage the variance of gradients to better distill the items with relevant terms. If an item contains the relevant terms, it will be semantically similar to the target item but annotated as a negative; such items exhibit high gradient variance (Agarwal et al., 2022; Zhou et al., 2022).
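A minimal sketch of this rank-based selection of high-variance candidates, with relative rank standing in for the gradient-variance computation (the ranked lists and cutoff $k$ are illustrative): when the true target falls outside the top-$k$, the top-$k$ candidates are added as extra positive pairs.

```python
def augment_targets(ranked_lists, k):
    """Build extra training pairs T' from initial retrieval results.
    ranked_lists: {circumlocution: (target_item, ranked item list)}.
    When the true target misses the top-k, the top-k candidates act as
    pseudo-positives that may carry relevant-but-unseen terms."""
    extra = []
    for circ, (target, ranking) in ranked_lists.items():
        rank = ranking.index(target) + 1          # 1-based rank of true target
        if rank > k:                              # target missed the top-k
            for item in ranking[:k]:              # distill top-k candidates
                extra.append((circ, item))
    return extra

ranked = {
    "the good queen with a wand": ("Fairy Godmother",
        ["Godmother", "Prince", "Fairy Godmother", "Stepmother"]),
    "glass footwear left behind": ("Glass Slipper",
        ["Glass Slipper", "Pumpkin", "Godmother", "Prince"]),
}
print(augment_targets(ranked, k=2))
```

Only the first query produces extra pairs: its target is ranked third, so the two higher-ranked items are distilled, while the second query's target is already top-1 and is left alone.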
Following SimANS (Zhou et al., 2022), which used relative relevance scores to substitute for the time-consuming gradient variance computation, our idea is to selectively extract the potentially relevant items based on their rank relative to the original target item, as described in Alg. 1, lines 9-16. We denote $C$ as the circumlocution and $I_{+}$ as the corresponding target item. With the model $\Theta_t$ trained on the original dataset $\mathcal{T} = \{(C, I_{+})\}$, we first choose the circumlocutions for which the target item is not ranked among the top-$k$ retrieved items. We regard such circumlocutions as those requiring unseen terms. We then extract the items $I'_{+}$ ranked within the top-$k$ as additional target items that provide relevant terms. As a result, we get the new training set $\mathcal{T}' = \{(C, I'_{+})\}$. To leverage the new dataset, we distill the knowledge of items with relevant terms via self-knowledge distillation (KD) (Furlanello et al., 2018). Self-KD is a technique where a neural network improves its performance by using its own outputs as training labels, serving as both the teacher and the student model. The procedure is given in Alg. 1, lines 17-22. The student model is newly initialized with the same parameters as the teacher before the teacher was trained on $\mathcal{T}$, and the training process outlined in Subsec. 3.1 is repeated on the augmented dataset $\mathcal{T} \cup \mathcal{T}'$. By learning from this dataset, the student model can understand the relevance of target items that were originally unseen by the teacher model. The final prediction is the ensembled prediction of $\Theta_t$ and $\Theta_s$, following Furlanello et al. (2018).

# 4 Experiments

In this section, we first evaluate our strategy on an intermediary task that simulates item-recall difficulties.
Subsequently, we transition to utterances from real-world PWA datasets sourced from AphasiaBank (Forbes et al., 2012), which allows us to validate our findings in a more clinically relevant context.

# 4.1 Known-item Retrieval

Dataset and Evaluation Details Reddit-TOMT (Bhargav et al., 2022) and TREC-TOT 2023 (Arguello et al., 2023) are information retrieval benchmarks involving the retrieval of a target movie for which a user cannot recall a precise identifier. Compared to Reddit-TOMT, TREC-TOT 2023 consists of fewer queries and a much larger corpus pool. We leverage both to verify the effectiveness of our approach across varying data sizes. Details on data statistics are in Appx. C. For evaluation, we build our backbone model co-Condenser\* from Kim et al. (2023), where co-Condenser (Gao and Callan, 2021) is pre-trained on a domain-specific corpus and the MaxSim operator handles documents exceeding the model's token limit. We inject noise by random deletion (GradSelect$_C^d$) and incrementally apply selective self-KD (GradSelect$^d$). We evaluate the test sets using three standard retrieval metrics: nDCG, Recall (R), and Mean Reciprocal Rank (MRR). More details on data and settings are in Appx. C.

Baselines We compare GradSelect with the approaches that our method builds upon. We use (1) circumlocution augmentation: Cutoff (Shen et al., 2020) for the original random-deletion performance. On top of GradSelect$_C^d$, we implement (2) item augmentation with self-KD (Furlanello et al., 2018), with both soft and hard labels, to highlight the benefits of our enhancement.

Results The results on Reddit-TOMT and TREC-TOT 2023 are presented in Table 3. GradSelect improves performance on both datasets, with both components of GradSelect proving effective.
For the circumlocution augmentation, GradSelect$_C^d$ outperforms Cutoff, achieving higher scores across all metrics. Moreover, the incremental application of our item augmentation further boosts GradSelect. Our strategy of selecting relevant items consistently outperforms self-KD with soft and hard labels.

Table 3: nDCG@1000, nDCG@10, Recall, and MRR on the Reddit-TOMT and TREC-TOT test sets. We incrementally add our components and compare them with the baselines. The best scores are highlighted in bold.

# 4.2 Word Completion for PWA

Dataset and Evaluation Details A-Cinderella (Salem et al., 2023b), derived from the transcripts of AphasiaBank (Forbes et al., 2012), is a dataset designed for predicting intended words in cases of paraphasia. The dataset consists of utterances from patients with aphasia (PWA) recalling the story of Cinderella. Paraphasia is a broader symptom of anomia and refers to the production of unintended or incorrect words. Within this dataset, instances of paraphasia are addressed as follows: upon an unintended word by a patient, the word is masked, and masked-word prediction is used to identify the intended word. We hypothesize that assessing our approach on this dataset will demonstrate its practical applicability. While anomia relates specifically to difficulties in word recall, the task at hand can also be viewed as relevant to anomia: the masked word serves as the target word that an anomic patient struggles to recall, with the model assisting in identifying it. Additionally, we introduce a challenge set that enhances the dataset's relevance to anomia. This is done by removing the 'retracing' words near the masked item, which could potentially leak the answer during circumlocution. To simulate the anomia scenario, the word must not be explicitly included in the circumlocution; consequently, we delete any instance of the intended word from the context surrounding the masked item to build the challenge set.

The evaluation is done in a 10-fold cross-validation setting. We use DPR as our backbone model. Following the setting of Subsec. 4.1, we also apply the MaxSim operator (DPR$_{maxsim}$) and implement ours on top of that. We evaluate both versions of noise injection: replacement (GradSelect$^r$) and deletion (GradSelect$^d$). The metrics are exact match (EM) and accuracy at 5 (acc@5), following Salem et al. (2023b).

Baselines In addition to the baselines in Subsec. 4.1, we consider strong augmentation baselines, comparing two types of circumlocution augmentation: replacement and deletion. EDA (Wei and Zou, 2019) introduces random insertions, replacements, and deletions together. For replacement (R), we use Random for random noise insertion, SMART (Jiang et al., 2020) for virtual adversarial noise insertion that creates diverse data, and VDA (Zhou et al., 2021) for a virtual augmentation strategy targeting both semantic relevance and diversity. For deletion (D), in addition to Random (Cutoff), we use Large-loss, which selects augmented data with a loss higher than the original for diversity, as suggested by Yi et al. (2021) and Kamalloo et al. (2022). Conversely, Small-loss selects the data with the lower loss to exclude low-relevance samples (Han et al., 2018). Additionally, we include the performance of GPT-4 (OpenAI, 2023) for a comprehensive comparison. Details of the baseline implementations are in Appendices C.3 and C.4.

Results The results are shown in Table 4. First, GradSelect improves performance over DPR$_{maxsim}$. The performance follows the same trend as Subsec. 4.1 (DPR$_{maxsim}$ < Random < GradSelect$_C$ < hard- and soft-label KD < GradSelect), across both noising methods and both datasets.
This demonstrates that our incremental application over the original strategies provides significant improvements.

Table 4: EM and acc@5 scores of GradSelect on A-Cinderella, with comparisons.

Second, our method achieves better results than the other circumlocution augmentation methods. The comparison reveals that EDA, which uses both random deletion and replacement, performs worse than ours. Moreover, neither the approaches targeting only diversity, such as SMART and Large-loss, nor those targeting only relevance, such as Small-loss, improve performance over the random baseline, whereas our strategy outperforms it. This highlights that leveraging both relevance and diversity is important in addressing anomia, which our strategy achieves through gradient-based selection. GradSelect also outperforms VDA, which likewise targets both diversity and relevance. We attach the significance test results in Appx. C.5.

Figure 3: Quantitative analysis of the gradient-based proxy. The x-axis shows the removed intervals based on gradient values and the y-axis the EM score decrease.

Finally, our model outperforms GPT-4 across both datasets. LLMs' shortcomings become evident when confronted with tail data, and they suffer from perturbations (Qiang et al., 2024). In contrast, our model effectively mitigates such shortcomings, underscoring its superior robustness in handling data with unseen and SPE terms.

# 4.3 Analysis

Gradient as a Proxy for Perturbance We validate the alignment between the derived gradient order and the degree of SPE for each term. We assess whether terms with high gradients are free of SPE and indeed relevant, so that their removal significantly degrades performance. To investigate this, we compare the effects of removing terms with different gradient degrees.
We rank the terms in the circumlocutions by their gradient values in descending order and partition them into five intervals. We eliminate half of the terms within each interval and proceed with model training. The result is shown in Fig. 3: model performance drops notably when the terms with the top gradients are removed. This confirms our hypothesis that top-gradient terms serve as proxies for the unperturbed terms necessary for the model to consistently identify the target item from noisy input.

Relevance and Diversity of Gradient-based Selection Following Zhao et al. (2022), we verify the quality of the augmented data from the perspectives of relevance and diversity. For relevance, we measure the augmentation error rate: the percentage of augmented data that receives a lower acc@5 score than the original data. For diversity, we calculate the average cosine distance between the data before and after augmentation. Additionally, we evaluate our ablated version (GradSelect$_C^d$) with the best hyperparameters and report its performance. The results are shown in Table 5. Setting $m > 0$ promotes relevance by shielding the keywords from noise injection, and setting $n > 0$ promotes diverse samples by targeting the terms that affect model performance. Ours improves model performance by balancing diversity and relevance.

Table 5: Error rate, distance, and acc@5 results of ours compared to the ablated versions on the A-Cinderella challenge set. The second-best score is underlined.
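The two data-quality measures above (augmentation error rate for relevance, average cosine distance for diversity) can be sketched as below, with toy embeddings and per-example acc@5 values standing in for model outputs:

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity between two embedding vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def diversity(orig_embs, aug_embs):
    """Average cosine distance between original and augmented embeddings."""
    return float(np.mean([cosine_distance(o, a)
                          for o, a in zip(orig_embs, aug_embs)]))

def error_rate(orig_acc5, aug_acc5):
    """Share of augmented examples scoring below their originals (relevance)."""
    worse = sum(a < o for o, a in zip(orig_acc5, aug_acc5))
    return worse / len(orig_acc5)

rng = np.random.default_rng(1)
orig = rng.normal(size=(4, 16))
aug = orig + 0.1 * rng.normal(size=(4, 16))       # mild augmentation noise
print(diversity(orig, aug))                        # small distance: mild change
print(error_rate([1, 1, 0, 1], [1, 0, 0, 1]))     # 0.25: one example got worse
```

A good augmentation strategy pushes the distance up (diversity) while keeping the error rate down (relevance), which is the trade-off the $m$ and $n$ cutoffs control.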
In this study, we investigate the potential of language models (LMs) in aiding patients experiencing anomia, a difficulty in recalling the names of items. Identifying the intended target item from a patient's circumlocution involves two challenges of term failure and error: (1) the terms relevant to identifying the item remain unseen, and (2), what makes the challenge unique, terms inherently perturbed by semantic paraphasia are not exactly related to the target item and hinder the identification process. To address each, we propose robustifying the model against semantically paraphasic errors and enhancing the model with unseen terms via gradient-based selective augmentation. Specifically, the gradient value controls augmented data quality amid semantic errors, while the gradient variance guides the inclusion of unseen but relevant terms. Due to limited domain-specific datasets, we evaluate the model on the Tip-of-the-Tongue dataset as an intermediary task and then apply our findings to real patient data from AphasiaBank. Our results demonstrate strong performance against baselines, aiding anomia patients by addressing the outlined challenges.
# 1 Introduction

Theory of Mind (ToM), the capability to infer others' mental states such as beliefs, desires, and intentions, is essential for narrative comprehension (Premack and Woodruff, 1978; Apperly, 2010), where understanding characters' motivations and predicting their behaviors across extended storylines demands that readers construct rich mental models of each character.

Figure 1: An example from King Lear: a novel plot and scenario, character conversations with inner thoughts (e.g., King Lear: [I am eager to hear Cordelia's profession of love. Surely it will outshine her sisters'.]), a belief-focused ToM multiple-choice QA pair about King Lear's belief regarding Cordelia's profession of love, and the corresponding ToM-based relation triples, e.g., (King Lear, DesiresToHear, Cordelia's profession of love), (King Lear, BelievesAboutCordelia, Cordelia's profession of love will outshine her sisters'), (King Lear, IntendsTo, divide the kingdom based on daughters' professions of love), (King Lear, FeelsTowardsCordelia, disbelief and shock at Cordelia's refusal to flatter King Lear).

Specifically, ToM reasoning over prolonged narratives requires comprehensive contextualization of accumulated knowledge about characters' backgrounds, personalities, and past experiences with their current circumstances (Davis, 1983; Harwood and Farrar, 2006; Apperly, 2010).
When engaging with narratives, humans constantly construct and update models of characters' mental states throughout the storyline, which allows them to track psychological development and draw connections between past experiences and present behaviors (Schneider, 2001). Such a temporal and evolutionary dimension of understanding, which is crucial for deep character comprehension, remains underexplored in computational approaches. Despite the increasing sophistication of Large Language Models (LLMs), research reveals significant limitations in their ToM reasoning capabilities, particularly in complex narrative contexts (Nematzadeh et al., 2018b; Gandhi et al., 2023; Tracey et al., 2022; Ullman, 2023; Zhou et al., 2025). Perspective-taking, which involves inferring what different characters perceive and know based on their unique vantage points, constitutes a critical aspect of human ToM reasoning (Davis, 1983; Harwood and Farrar, 2006). For readers of novels, perspective-taking is enriched by accumulated knowledge of characters' backgrounds and past experiences. However, existing computational approaches to ToM reasoning often neglect this crucial dimension, instead focusing on isolated scenarios without sufficient global context (Wilf et al., 2023; Huang et al., 2024; Hou et al., 2024; Jung et al., 2024; Zhou et al., 2025). Prior ToM benchmarks like CharToM (Zhou et al., 2025) evaluate understanding through brief vignettes with limited character history.

In light of the need for a benchmark that examines LLMs' long-context ToM reasoning capabilities, we construct LitCharToM. LitCharToM is built upon classic literary narratives with characters that possess rich experiences developed over time through multiple interactions and evolving circumstances. This temporal dimension allows us to evaluate models' ability to keep track of characters' psychological evolution, an essential capability for human-like narrative comprehension.
To enhance LLMs’ ToM reasoning capabilities in long narratives, we propose EvolvTrip a novel framework for understanding fictional characters via temporal-aware structured mental state representation. While previous works such as PerceptToM and EnigmaToM (Jung et al., 2024; Xu et al., 2025) focus on visual perception, EvolvTrip models complex mental states informed by characters’ backgrounds, histories, and accumulated experiences. By encoding these perspective-aware mental states as structured triples within a temporal knowledge graph, EvolvTrip enable LLMs to reason about character psychology with contextual richness more closely resembling human ToM processes during narrative comprehension. Empirical results show that EvolvTrip brings significant performance improvements in long-context ToM reasoning to a range of LLMs. EvolvTrip is particularly effective in modeling ToM in extended-context scenarios with corss-plot narrative contents. Further, EvolvTrip is also effective when used with smaller LLMs, partially bridging the performance gap with larger architectures and demonstrating enhanced resilience when processing longer narratives. Our contributions can be summarised as follows: • We construct LitCharToM, a character-centric benchmark for evaluating ToM reasoning in literary contexts using classic novels. LitCharToM provides rich scenarios with complex social dynamics and long-term narrative dependencies, enabling comprehensive assessment of contextual understanding. • We introduce a perspective-aware temporal knowledge graph with entity-guided character linking. Our knowledge graph represents characters’ mental states as structured triples tagged with temporal markers and connects character instances across narrative segments. • We propose EvolvTrip, a neuro-symbolic approach for enhancing ToM reasoning. 
EvolvTrip incorporates a structured representation of characters' evolving mental states, which significantly improves LLMs' performance on character-centric ToM reasoning tasks that require deep contextual understanding.

# 2 Related Work

# 2.1 Theory of Mind Evaluation in LLMs

Numerous benchmarks have been developed to evaluate ToM capabilities in LLMs by simulating psychological and cognitive experimental designs. Early benchmarks like ToMi (Nematzadeh et al., 2018a) focused on evaluating models' ability to reason about basic beliefs. This foundation was extended by SocialIQA (Sap et al., 2019b), which specifically tests social and emotional intelligence. More advanced ToM reasoning has been explored in Hi-ToM (Wu et al., 2023), which assesses higher-order recursive reasoning about others' beliefs. Recent benchmarks have diversified the evaluation contexts, with FANToM (Kim et al., 2023) stress-testing ToM within conversational settings and OpenToM (Xu et al., 2024) incorporating explicit personality traits and preferences. Comprehensive evaluation platforms like ToMBench (Chen et al., 2024) encompass multiple tasks that target 31 distinct social cognitive abilities.

Despite their wide coverage, these benchmarks share common limitations. Most rely heavily on pre-determined rules and templates for scenario generation (Nematzadeh et al., 2018a; Le et al., 2019), which can introduce predictable patterns and spurious correlations, potentially leading to the Clever Hans phenomenon (Lapuschkin et al., 2019). Moreover, they typically feature brief, isolated scenarios that fail to capture the complexity of the social relationships and interactions that characterize real-world ToM reasoning, overlooking the importance of comprehensive contextual understanding spanning extended narrative timeframes.
Figure 2: Our ToM-based character understanding pipeline: (1) source data collection from the CoSER dataset, including novel plots and character conversations with [Thought] and (Action) annotations; (2) GPT-4o generation of belief, desire, emotion, and intention QA pairs with two-stage verification; (3) extraction of BelievesAbout, DesiresFor, FeelsTowards, and IntendsTo relation triples; and (4) temporal knowledge graph construction by integrating previous and current plot information.

# 2.2 Character Understanding in Narrative Comprehension

There have been consistent efforts in character-centric narrative understanding, with works like NarrativeQA (Kočiský et al., 2018), LitBank (Bamman et al., 2019; Sims et al., 2019; Bamman et al., 2020), LiSCU (Brahman et al., 2021), and PeQA (Xu et al., 2022) developing question-answering frameworks for longer narrative contexts. These approaches primarily evaluate surface-level comprehension rather than deeper understanding of characters' mental states and psychological development. The psychology literature consistently shows that human readers construct rich mental models of fictional characters' beliefs and intentions (Apperly, 2010), tracking these mental states across extended narratives. This cognitive process relies heavily on accumulated knowledge of characters' backgrounds, histories, and evolving psychological states, aspects that most computational approaches have not adequately modeled.

# 2.3 Knowledge Representation for ToM Reasoning

Knowledge bases for representing mental states and social reasoning have evolved from general-purpose semantic networks like ConceptNet (Liu and Singh, 2004) to more specialized representations. Event2Mind (Rashkin et al., 2018) introduced event-based knowledge graphs that capture characters' intentions and reactions, while ATOMIC (Sap et al., 2019a) models if-then relationships for simple social events.
Recent approaches include entity state tracking in procedural contexts (Tandon et al., 2020; Zhang et al., 2023), though these have not been specifically applied to character understanding in extended narratives. Meanwhile, neural knowledge bases like COMET (Bosselut et al., 2019) generate commonsense inferences about social situations, but lack the temporal depth needed for tracking characters across narrative arcs.

# 3 Dynamic Character Understanding through Evolving Mental State Triplets

We introduce the construction of the LitCharToM benchmark and the design of the EvolvTrip framework for evaluating Theory-of-Mind comprehension in literary narratives. EvolvTrip (Evolving Triplets) is a structured knowledge representation approach that captures the dynamic evolution of character mental states across narrative arcs. Following the pipeline illustrated in Figure 2, our construction methodology encompasses four integrated phases: (1) source data collection, (2) ToM-based question generation, (3) character relation triple extraction, and (4) temporal knowledge graph construction.

# 3.1 LitCharToM: Source Data Collection

LitCharToM builds upon the CoSER dataset (Wang et al., 2025), which comprises 81 literary works from Project Gutenberg. CoSER provides rich character-centric data including plot summaries, character profiles, and multi-dimensional dialogues. We further selected 20 books from CoSER that exhibit sophisticated character development, complex interpersonal dynamics, and narrative depth spanning multiple scenes. See Appendix A for detailed statistics of LitCharToM. We base LitCharToM on the CoSER dataset because of its multi-dimensional representation of character dialogue, which includes verbal speech (direct communications), actions (physical behaviors, denoted by parentheses), and thoughts (internal cognitive processes, denoted by brackets).
This tripartite structure offers particular value for ToM analysis, as each dimension maps differently to mental state categories. Actions reveal intentions and emotions (e.g., (nods firmly) suggests deliberate agreement). Thoughts provide rich access to all four ToM dimensions, with the strongest mapping to emotions (e.g., [I’m terrified]), followed by desires (e.g., [I wish I could leave]), intentions (e.g., [I’ll confront him tomorrow]), and beliefs (e.g., [He’s lying to everyone]). This structured representation enables EvolvTrip to extract both explicit and implicit mental states from complementary sources, where thoughts reveal deeper affective and cognitive layers, and actions reflect behavioral manifestations of internal states.

# 3.2 LitCharToM: ToM-Based Question Generation

For each character participating in each plot’s dialogues, we systematically generate ToM questions across four dimensions: belief, emotion, intention, and desire. We employ GPT-4o (OpenAI, 2024) to construct multiple-choice questions requiring reasoning about characters’ mental states. For each ToM dimension, GPT-4o examines multiple sources of information: the current plot content, the conversation scenario, character dialogues (including the current character’s thoughts), and summaries of previous plot segments. This comprehensive context allows the model to identify salient mental states across the narrative progression, formulating complex questions with four answer options: one correct answer grounded in the character’s depicted psychology and three plausible distractors representing common misinterpretations. To ensure accuracy, we implement a two-stage verification process: initially, GPT-4o verifies all generated questions for logical consistency, clarity, and the presence of a single unambiguously correct answer. Subsequently, human annotators assess accuracy, difficulty level, and appropriateness.
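The question format and the first-stage validity checks described above can be expressed as a small data structure. The sketch below is our own illustration; the class and field names (`ToMQuestion`, `validate`) are assumptions, not names from the paper's released code:

```python
# Hypothetical sketch of the multiple-choice format in Section 3.2:
# four options, exactly one correct answer, tagged with a ToM dimension.
from dataclasses import dataclass

DIMENSIONS = {"belief", "emotion", "intention", "desire"}

@dataclass
class ToMQuestion:
    character: str
    dimension: str       # one of DIMENSIONS
    question: str
    options: list        # four answer strings
    correct_index: int   # index of the psychologically grounded answer

def validate(q: ToMQuestion) -> bool:
    """First-stage automatic check: a known dimension, four distinct
    options, and a single in-range correct answer."""
    return (
        q.dimension in DIMENSIONS
        and len(q.options) == 4
        and len(set(q.options)) == 4
        and 0 <= q.correct_index < 4
    )

q = ToMQuestion(
    character="Elizabeth",
    dimension="belief",
    question="What does Elizabeth believe about Darcy's intentions?",
    options=["He is proud and dismissive", "He admires her secretly",
             "He is indifferent to her", "He fears her family"],
    correct_index=0,
)
assert validate(q)
```

Questions failing such checks would, per the pipeline above, be refined or regenerated before the human annotation stage.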
Notably, over $90\%$ of the entries are valid at the first generation attempt, demonstrating the effectiveness of our generation methodology. Questions identified as problematic during either verification stage undergo refinement or complete regeneration, followed by an additional verification pass.

# 3.3 EvolvTrip: Mental State Triple Extraction

To provide a structured representation of characters’ mental activities, EvolvTrip extracts character-centric mental state triples following a subject-predicate-object structure. The subject corresponds to the character, the predicate indicates the ToM dimension (e.g., BelievesAbout, FeelsTowards, IntendsTo, DesiresFor), and the object constitutes the content of the mental state. For each narrative plot, we employ GPT-4o to generate triples by analyzing the multi-dimensional dialogue data through a perspective-taking lens, which distinguishes between information accessible to each character and information they cannot know. This perspective-aware approach examines character thoughts that directly reveal mental states, character actions that imply underlying mental states, and verbal dialogues containing explicit statements about beliefs, emotions, intentions, or desires. By identifying events observable by a given character and excluding unobservable ones, this approach significantly alleviates the reasoning burden for LLMs, enabling more accurate mental state attribution. Predicates are specified to provide precise context, such as using BelievesAbout to indicate a belief concerning another entity or FeelsTowards to denote an emotion directed at someone. For triple verification, GPT-4o conducts an initial assessment of all generated triples for logical consistency with the narrative context, adherence to the correct triple format, and appropriate perspective constraints (ensuring characters only form mental states about information they could plausibly access).
We then randomly select $40\%$ of the triples for human expert verification, assessing their accuracy and relevance to the characters’ depicted mental states. Triples identified as incorrect during either verification stage are regenerated and re-verified, ensuring a high-quality knowledge representation. Detailed dataset quality statistics are provided in Appendix A.2.

# 3.4 EvolvTrip: Temporal Knowledge Graph Construction

The core innovation of EvolvTrip is capturing the dynamic nature of character psychology throughout narratives. We construct a temporal knowledge graph where nodes represent characters or significant events, edges embody the generated triples with labels specifying the ToM dimension, and temporal tags associate each triple with specific plot numbers. Each triple is tagged with the plot segment in which the mental state appears, enabling systematic tracking of psychological development. We establish inter-plot links between instances of the same character across different segments, facilitating analysis of how characters’ mental states evolve in response to narrative developments. To maintain psychological consistency, we provide GPT-4o with each character’s past mental states when generating triples for new plot segments, enabling it to build upon established psychological profiles. For similar mental states concerning the same subject, EvolvTrip combines or refines them based on new information. When new information contradicts earlier states, we update the triples to reflect character development, clearly indicating the temporal transition to show how the character’s perspective has evolved throughout the narrative. This temporally linked representation provides a comprehensive view of character psychology that evolves organically through the narrative, capturing the dynamic nature of beliefs, emotions, intentions, and desires as they transform in response to story events.
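The temporal tagging and inter-plot linking described above can be sketched as a small in-memory graph, assuming each triple carries a plot index. The class and method names are illustrative, not the paper's implementation; earlier states are kept alongside updates so temporal transitions stay visible:

```python
# Minimal sketch of the temporal knowledge graph in Section 3.4:
# each (subject, predicate, object) edge stores a plot-indexed history.
from collections import defaultdict

PREDICATES = {"BelievesAbout", "DesiresFor", "FeelsTowards", "IntendsTo"}

class TemporalToMGraph:
    def __init__(self):
        # (character, predicate, target) -> list of (plot_idx, state)
        self.edges = defaultdict(list)

    def add(self, subject, predicate, obj, plot_idx, state):
        assert predicate in PREDICATES
        # Append rather than overwrite: contradicting states are kept
        # with their plot tags so the transition remains visible.
        self.edges[(subject, predicate, obj)].append((plot_idx, state))

    def state_at(self, subject, predicate, obj, plot_idx):
        """Most recent mental state for this edge at or before plot_idx."""
        history = [h for h in self.edges[(subject, predicate, obj)]
                   if h[0] <= plot_idx]
        return max(history, default=None)

    def trajectory(self, subject):
        """All of one character's mental states, ordered by plot segment."""
        out = []
        for (s, p, o), hist in self.edges.items():
            if s == subject:
                out.extend((idx, p, o, state) for idx, state in hist)
        return sorted(out)

g = TemporalToMGraph()
g.add("Pip", "FeelsTowards", "Estella", 1, "awed admiration")
g.add("Pip", "FeelsTowards", "Estella", 5, "painful longing")
assert g.state_at("Pip", "FeelsTowards", "Estella", 3)[1] == "awed admiration"
```

`state_at` mirrors how past mental states are retrieved when prompting GPT-4o for a new plot segment, and `trajectory` corresponds to the inter-plot links that expose a character's psychological development.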
# 4 Experiments

# 4.1 Setup

We conduct experiments on our multiple-choice Theory-of-Mind benchmark comprising 2,539 questions spanning four dimensions: belief, emotion, intention, and desire. All experiments use a standardized prompt template, as detailed in Appendix B. To investigate models’ ability to leverage contextual information for ToM comprehension, we vary the context lengths of the story plots provided to the models, examining their performance with and without the structured triple representations generated by EvolvTrip. For each question, models are evaluated in two settings: (1) standard prompting with only the narrative context and question, and (2) EvolvTrip-enhanced prompting where relevant mental state triples are included as additional context. This allows us to assess the impact of EvolvTrip’s explicit structured knowledge on models’ ToM reasoning capabilities.

Evaluated LLMs. We evaluate a diverse set of LLMs as our baselines, including GPT-4o and GPT-4o-mini (OpenAI, 2023), accessed through official APIs. For the open-source LLMs, we include DeepSeek-R1 (DeepSeek-AI, 2025), Qwen2.5-72B-Instruct (Yang et al., 2024), Llama3.3-72B-Instruct (Dubey et al., 2024), DS-R1-Dist-Qwen-32B (DeepSeek-R1 distilled into a 32B Qwen architecture) (DeepSeek-AI, 2025), Qwen3-32B (Yang et al., 2025), Qwen2.5-32B-Instruct (Yang et al., 2024), InternLM2.5-20B-Chat (Cai et al., 2024), Qwen3-14B (Yang et al., 2025), Qwen2.5-14B (Yang et al., 2024), DS-R1-Dist-Qwen-14B (DeepSeek-AI, 2025), Qwen3-8B (Yang et al., 2025), Qwen2.5-7B-Instruct (Yang et al., 2024), InternLM3-8B-Instruct (Cai et al., 2024), and InternLM2.5-7B-Chat (Cai et al., 2024). For each model, we test both a standard version and a triple-enhanced version (denoted as "w/ Triple") that incorporates structured mental state triples into the context. All models are accessed either through official APIs or using weights downloaded from Hugging Face repositories, in compliance with their terms of use.
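The two evaluation settings can be sketched as a prompt builder plus a multiple-choice accuracy metric. The prompt wording below is a hypothetical stand-in, not the exact template from the paper's Appendix B:

```python
# Sketch of standard vs. EvolvTrip-enhanced prompting (Section 4.1).
# When triples are supplied, they are prepended to the narrative context.
def build_prompt(plot, question, options, triples=None):
    parts = []
    if triples:  # EvolvTrip-enhanced setting
        lines = [f"({s}, {p}, {o})" for s, p, o in triples]
        parts.append("Character mental state triples:\n" + "\n".join(lines))
    parts.append("Story plot:\n" + plot)
    parts.append("Question: " + question)
    parts.append("\n".join(f"{chr(65 + i)}. {opt}"
                           for i, opt in enumerate(options)))
    parts.append("Answer with a single letter.")
    return "\n\n".join(parts)

def accuracy(predictions, gold):
    """Multiple-choice accuracy over parallel lists of option letters."""
    assert len(predictions) == len(gold)
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

p = build_prompt(
    "Pip visits Satis House.",
    "What does Pip feel towards Estella?",
    ["Awe", "Contempt", "Indifference", "Fear"],
    triples=[("Pip", "FeelsTowards", "Estella")],
)
assert "FeelsTowards" in p and "A. Awe" in p
```

The same `build_prompt` call with `triples=None` yields the standard-prompting condition, so the two settings differ only in the prepended triple block.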
# 4.2 Out-of-Distribution Evaluation

To evaluate the generalizability of EvolvTrip to new literary works, we conducted experiments using five books as an out-of-distribution (OOD) test set, comprising 779 questions across the four ToM dimensions. This setup allowed us to assess how well models augmented with EvolvTrip’s structured representations can transfer their ToM reasoning capabilities to entirely new narrative contexts not seen during training or development. For these experiments, we selected three representative smaller-scale models: Qwen3-8B, Qwen2.5-7B-Instruct, and InternLM3-8B-Instruct. We evaluated each model in two distinct settings:

Table 1: Multiple-choice QA accuracy scores of LLMs. The input to LLMs is the current story plots. "w/ Triple" indicates the prompt includes the character’s ToM-based relation triples. The best performance of each model is bolded.

Direct Inference. Models were provided with the story plot, conversation scenario description, and question without any fine-tuning. We tested both standard inference (using only narrative content) and EvolvTrip-enhanced inference (including relevant mental state triples in the context).

EvolvTrip-based Fine-Tuning. Models were fine-tuned on training data where the output format first presented the relevant character relation triples, followed by the correct answer option. This structured approach was designed to help models learn the explicit connections between narrative information, character mental states, and appropriate answers. The EvolvTrip-based fine-tuning approach offers a significant advantage: it guides models to first extract structured knowledge representations before generating answers, effectively decomposing the complex ToM reasoning process into more manageable steps. By learning to generate structured triples as an intermediate step, models develop a more robust understanding of character psychology that transfers more effectively to new literary contexts.
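The fine-tuning target (triples first, then the answer option) can be sketched as a small formatting function. The exact serialization is our assumption; the paper's real training examples are in its Appendix C:

```python
# Sketch of the EvolvTrip-based fine-tuning target (Section 4.2):
# the supervised output lists the relevant mental state triples before
# the correct option, so structure extraction precedes answering.
def make_target(triples, answer_letter):
    triple_lines = "\n".join(
        f"({s}, {p}, {o}): {state}" for s, p, o, state in triples
    )
    return f"Relevant triples:\n{triple_lines}\nAnswer: {answer_letter}"

target = make_target(
    [("Pip", "FeelsTowards", "Estella", "awed admiration")],
    "A",
)
assert target.endswith("Answer: A")
```

Because the triples appear before the answer in the training output, the model is rewarded for producing the intermediate structured representation, which is the decomposition the section above describes.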
Results from these experiments are presented in Table 3, demonstrating how the EvolvTrip-based approaches affect performance across different model architectures when faced with previously unseen literary works. We provide the training examples in Appendix C.

# 5 Results and Analysis

# 5.1 Performance on ToM Reasoning Tasks

The experimental results demonstrate the significant impact of EvolvTrip’s structured mental state triples across the ToM reasoning dimensions. As shown in Table 1, the integration of triple representations consistently enhances model performance, with improvements observed across all model scales and ToM dimensions. With an average prompt length of 2,500 tokens for both standard and EvolvTrip-enhanced inputs, these improvements highlight the value of structured representation rather than simply increasing context length.

Table 2: Multiple-choice QA performances of LLMs in terms of accuracy. The input to LLMs is the current story plots and the previous plots’ summary. The best performance of each model is bolded.

Table 3: Ablation study results on out-of-distribution test sets across four ToM dimensions. "w/ Triple" indicates models that use the structured triple representation in either inference or training.

The EvolvTrip-enhanced approach yields substantial performance gains for all evaluated models. DeepSeek-R1 shows the most dramatic improvement, increasing from $70.74\%$ to $74.44\%$ when incorporating EvolvTrip triples. Similarly, Qwen3-14B experiences a remarkable improvement of $5.42\%$, from $58.04\%$ to $63.46\%$. Even top-performing models like GPT-4o benefit from EvolvTrip integration, improving from $70.86\%$ to $73.36\%$. These consistent enhancements highlight the fundamental value of EvolvTrip’s structured knowledge representations in ToM reasoning tasks. The impact of EvolvTrip is particularly pronounced for emotion recognition, where models show the largest accuracy gains.
InternLM2.5-7B-Chat improves by $2.00\%$ in emotion accuracy, from $65.18\%$ to $67.18\%$, while Qwen3-14B sees a remarkable improvement of $6.20\%$, from $59.81\%$ to $66.01\%$. This suggests that EvolvTrip’s explicit structured representations effectively bridge the gap between textual cues and the abstract emotional states they signify. Notably, EvolvTrip integration partially mitigates the performance gap between smaller and larger models. While Qwen3-32B outperforms Qwen3-8B by $2.75\%$ in standard settings, this gap narrows when both incorporate EvolvTrip triples. This demonstrates how EvolvTrip’s structured knowledge representations can enhance the reasoning capabilities of smaller models, making sophisticated ToM reasoning more accessible. EvolvTrip integration also helps balance performance across different ToM dimensions. Without triples, models typically perform best on Intention and worst on Belief, with considerable performance disparities. EvolvTrip integration narrows these gaps, providing more consistent reasoning capabilities across all mental state dimensions. For instance, DeepSeek-R1’s performance spread between its strongest and weakest dimensions decreases from $4.41\%$ to $4.11\%$ with EvolvTrip enhancement.

# 5.2 Performance with Extended Context

Table 2 presents model performance when the input is expanded to include both the current story plots and summaries of previous plots, increasing the average prompt length to approximately 4,500 tokens. This extended-context scenario reveals important insights about model behavior with longer narratives and the continued effectiveness of EvolvTrip integration under more challenging conditions. The addition of previous plot summaries creates a more challenging reasoning environment for all models, with notable performance decreases compared to the current-plot-only scenario in Table 1. For example, Qwen3-14B’s accuracy drops substantially from $58.04\%$ to $54.51\%$, and Qwen3-8B declines from $57.40\%$ to $52.75\%$. This performance degradation reflects the well-known challenge LLMs face with longer contexts, where relevant information must be identified within a larger text span. The integration of EvolvTrip’s structured mental state triples provides substantial benefits in this more challenging extended-context scenario. DS-R1-Dist-Qwen-14B shows a dramatic improvement from $56.25\%$ to $61.08\%$, while InternLM3-8B-Instruct improves from $53.21\%$ to $57.38\%$. This demonstrates the robust utility of EvolvTrip’s structured representations in guiding model attention toward relevant character information across longer narrative spans. The benefits of EvolvTrip integration are particularly evident for smaller models, which typically struggle more with extended contexts. Models like Qwen2.5-7B-Instruct show substantial improvements with triples, suggesting that EvolvTrip’s explicit structured knowledge helps these models overcome their inherent limitations in handling longer texts. Performance patterns across ToM dimensions remain consistent with the current-plot-only scenario, with the Emotion and Intention dimensions yielding higher accuracy than the Belief and Desire dimensions. EvolvTrip integration helps narrow these dimensional performance gaps, providing more balanced reasoning capabilities.

# 5.3 Ablation Study

To assess the generalizability of EvolvTrip, we conducted an ablation study using five books as out-of-distribution test cases. These books were not part of the training data, allowing us to evaluate how well models transfer ToM reasoning capabilities to entirely new literary contexts. As shown in Table 3, we compare two inference strategies across three model architectures.
In the Direct Inference setting, models show modest performance on ToM reasoning tasks, with EvolvTrip-enhanced inference consistently outperforming standard inference across all dimensions. This confirms that EvolvTrip’s structured triple representation provides effective scaffolding for ToM reasoning even without task-specific training. The Fine-Tuning section demonstrates significantly stronger results, where models were trained on data consisting of questions, EvolvTrip’s structured mental state triples, and answers. This triple-based training approach yields substantial improvements across all models and dimensions. For example, Qwen3-8B improves from $54.25\%$ to $58.12\%$ average accuracy when fine-tuned with EvolvTrip triples, and InternLM3-8B-Instruct shows the most dramatic improvement, reaching $58.67\%$ average accuracy. The consistent performance gains across different architectures highlight the transferability of EvolvTrip to novel literary works. Notably, EvolvTrip fine-tuned models maintain balanced performance across all four ToM dimensions, suggesting that the triple-based representation effectively bridges the gap between different types of mental state reasoning.
A compelling portrayal of characters is essential to the success of narrative writing. For readers, appreciating a character's traits requires the ability to infer their evolving beliefs, desires, and intentions over the course of a complex storyline, a cognitive skill known as Theory-of-Mind (ToM). Performing ToM reasoning in prolonged narratives requires readers to integrate historical context with current narrative information, a task at which humans excel but Large Language Models (LLMs) often struggle. To systematically evaluate LLMs' ToM reasoning capability in long narratives, we construct LitCharToM, a benchmark of character-centric questions across four ToM dimensions from classic literature. Further, we introduce EvolvTrip, a perspective-aware temporal knowledge graph that tracks psychological development throughout narratives. Our experiments demonstrate that EvolvTrip consistently enhances performance of LLMs across varying scales, even in challenging extended-context scenarios. EvolvTrip proves to be particularly valuable for smaller models, partially bridging the performance gap with larger LLMs and showing great compatibility with lengthy narratives. Our findings highlight the importance of explicit representation of temporal character mental states in narrative comprehension and offer a foundation for more sophisticated character understanding. Our data and code are publicly available at https://github.com/Bernard-Yang/EvolvTrip.